Test Report: Docker_Linux_crio 21978

c78c82fa8bc5e05550c6fccb0bebb9cb966c725e:2025-11-24:42489

Failed tests (48/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.26
44 TestAddons/parallel/Registry 14.48
45 TestAddons/parallel/RegistryCreds 0.42
46 TestAddons/parallel/Ingress 144.42
47 TestAddons/parallel/InspektorGadget 5.25
48 TestAddons/parallel/MetricsServer 5.31
50 TestAddons/parallel/CSI 45.48
51 TestAddons/parallel/Headlamp 2.52
52 TestAddons/parallel/CloudSpanner 5.48
53 TestAddons/parallel/LocalPath 8.13
54 TestAddons/parallel/NvidiaDevicePlugin 6.25
55 TestAddons/parallel/Yakd 5.25
56 TestAddons/parallel/AmdGpuDevicePlugin 5.29
106 TestFunctional/parallel/ServiceCmdConnect 602.82
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.64
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.31
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.63
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
162 TestFunctional/parallel/ServiceCmd/Format 0.54
163 TestFunctional/parallel/ServiceCmd/URL 0.54
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 602.92
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.61
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.87
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.87
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.2
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.34
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.53
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.53
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.53
294 TestJSONOutput/pause/Command 2.34
300 TestJSONOutput/unpause/Command 1.6
366 TestPause/serial/Pause 5.6
448 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.41
452 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.99
461 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.63
465 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
472 TestStartStop/group/old-k8s-version/serial/Pause 6.18
479 TestStartStop/group/newest-cni/serial/Pause 8.01
483 TestStartStop/group/no-preload/serial/Pause 7.58
488 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.19
489 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.57
496 TestStartStop/group/embed-certs/serial/Pause 5.84
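
To rerun any single failure locally, the test can be selected with Go's -run filter against minikube's integration suite. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the -minikube-start-args flag name is the one used by minikube's integration harness and its value here mirrors this job's docker/crio configuration, so verify both against your checkout:

	go test ./test/integration -v -timeout 60m \
		-run 'TestAddons/serial/Volcano' \
		-minikube-start-args='--driver=docker --container-runtime=crio'
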
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable volcano --alsologtostderr -v=1: exit status 11 (254.883804ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:30:57.755458   19051 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:30:57.755736   19051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:30:57.755746   19051 out.go:374] Setting ErrFile to fd 2...
	I1124 08:30:57.755750   19051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:30:57.755950   19051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:30:57.756202   19051 mustload.go:66] Loading cluster: addons-962100
	I1124 08:30:57.756535   19051 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:30:57.756551   19051 addons.go:622] checking whether the cluster is paused
	I1124 08:30:57.756634   19051 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:30:57.756649   19051 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:30:57.756990   19051 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:30:57.775475   19051 ssh_runner.go:195] Run: systemctl --version
	I1124 08:30:57.775534   19051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:30:57.792764   19051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:30:57.893090   19051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:30:57.893190   19051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:30:57.921518   19051 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:30:57.921539   19051 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:30:57.921545   19051 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:30:57.921550   19051 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:30:57.921554   19051 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:30:57.921571   19051 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:30:57.921575   19051 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:30:57.921580   19051 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:30:57.921585   19051 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:30:57.921595   19051 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:30:57.921604   19051 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:30:57.921609   19051 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:30:57.921617   19051 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:30:57.921622   19051 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:30:57.921630   19051 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:30:57.921653   19051 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:30:57.921661   19051 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:30:57.921666   19051 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:30:57.921669   19051 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:30:57.921674   19051 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:30:57.921678   19051 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:30:57.921685   19051 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:30:57.921690   19051 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:30:57.921698   19051 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:30:57.921703   19051 cri.go:89] found id: ""
	I1124 08:30:57.921748   19051 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:30:57.936456   19051 out.go:203] 
	W1124 08:30:57.937818   19051 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:30:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:30:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:30:57.937836   19051 out.go:285] * 
	* 
	W1124 08:30:57.940814   19051 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:30:57.942191   19051 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
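
Every addon-disable failure in this run exits the same way: before disabling, minikube checks whether the cluster is paused by listing kube-system containers with crictl (which succeeds above) and then running sudo runc list -f json on the node, which fails because /run/runc does not exist. A repro sketch reusing the profile name and the exact commands from the log above:

	# the check that fails: runc's state directory is absent on this image
	out/minikube-linux-amd64 -p addons-962100 ssh "sudo runc list -f json"
	# CRI-O's own container listing, which the log shows succeeding
	out/minikube-linux-amd64 -p addons-962100 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

The mismatch suggests the node's CRI-O is not keeping runtime state under the default runc root (for example, if it is configured with crun or a non-default runc --root), so the runc-based paused check cannot see the containers that crictl reports.
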

TestAddons/parallel/Registry (14.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.195468ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002892573s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003397467s
addons_test.go:392: (dbg) Run:  kubectl --context addons-962100 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-962100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-962100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.021382668s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 ip
2025/11/24 08:31:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable registry --alsologtostderr -v=1: exit status 11 (247.04138ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:21.056433   21479 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:21.056713   21479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:21.056723   21479 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:21.056727   21479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:21.056918   21479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:21.057159   21479 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:21.057519   21479 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:21.057537   21479 addons.go:622] checking whether the cluster is paused
	I1124 08:31:21.057623   21479 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:21.057638   21479 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:21.058023   21479 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:21.076028   21479 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:21.076094   21479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:21.093170   21479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:21.193714   21479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:21.193794   21479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:21.222552   21479 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:21.222574   21479 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:21.222579   21479 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:21.222584   21479 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:21.222587   21479 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:21.222591   21479 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:21.222595   21479 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:21.222599   21479 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:21.222604   21479 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:21.222621   21479 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:21.222633   21479 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:21.222637   21479 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:21.222640   21479 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:21.222643   21479 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:21.222645   21479 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:21.222651   21479 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:21.222654   21479 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:21.222658   21479 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:21.222661   21479 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:21.222664   21479 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:21.222667   21479 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:21.222669   21479 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:21.222672   21479 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:21.222680   21479 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:21.222686   21479 cri.go:89] found id: ""
	I1124 08:31:21.222735   21479 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:21.239909   21479 out.go:203] 
	W1124 08:31:21.241025   21479 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:21.241042   21479 out.go:285] * 
	* 
	W1124 08:31:21.244023   21479 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:21.245652   21479 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.48s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.693613ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-962100
addons_test.go:332: (dbg) Run:  kubectl --context addons-962100 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (248.889714ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:20.061097   21243 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:20.061437   21243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:20.061448   21243 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:20.061451   21243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:20.061653   21243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:20.061902   21243 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:20.062237   21243 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:20.062257   21243 addons.go:622] checking whether the cluster is paused
	I1124 08:31:20.062384   21243 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:20.062397   21243 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:20.062760   21243 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:20.080628   21243 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:20.080686   21243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:20.101071   21243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:20.201912   21243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:20.201985   21243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:20.231866   21243 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:20.231889   21243 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:20.231896   21243 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:20.231900   21243 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:20.231905   21243 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:20.231910   21243 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:20.231915   21243 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:20.231920   21243 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:20.231924   21243 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:20.231938   21243 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:20.231946   21243 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:20.231951   21243 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:20.231958   21243 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:20.231962   21243 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:20.231967   21243 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:20.231974   21243 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:20.231983   21243 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:20.231990   21243 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:20.231995   21243 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:20.231999   21243 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:20.232003   21243 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:20.232008   21243 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:20.232012   21243 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:20.232017   21243 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:20.232026   21243 cri.go:89] found id: ""
	I1124 08:31:20.232071   21243 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:20.245957   21243 out.go:203] 
	W1124 08:31:20.247197   21243 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:20.247218   21243 out.go:285] * 
	* 
	W1124 08:31:20.250269   21243 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:20.251621   21243 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (144.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-962100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-962100 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-962100 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [594ef30f-46eb-4533-92bc-9035cb77cac7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [594ef30f-46eb-4533-92bc-9035cb77cac7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.002951798s
I1124 08:31:21.435383    9243 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.845239238s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
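
The ssh exit status 28 above appears to be propagated from the remote command, and 28 is curl's CURLE_OPERATION_TIMEDOUT: the request to the in-node ingress never completed. A sketch for checking this by hand with an explicit timeout, using the same profile and Host header as the test:

	out/minikube-linux-amd64 -p addons-962100 ssh "curl -sS --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
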
addons_test.go:288: (dbg) Run:  kubectl --context addons-962100 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-962100
helpers_test.go:243: (dbg) docker inspect addons-962100:

-- stdout --
	[
	    {
	        "Id": "69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae",
	        "Created": "2025-11-24T08:29:13.070866673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T08:29:13.101040713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/hosts",
	        "LogPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae-json.log",
	        "Name": "/addons-962100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-962100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-962100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae",
	                "LowerDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-962100",
	                "Source": "/var/lib/docker/volumes/addons-962100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-962100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-962100",
	                "name.minikube.sigs.k8s.io": "addons-962100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "52626c57671cca89feb89fd2332b96fcc59f3db3a4f991b66200c8653078d474",
	            "SandboxKey": "/var/run/docker/netns/52626c57671c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-962100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67fc84e8838d05308c43a0142e49d7ab3d31c453db134ee2419e880ff573d4bb",
	                    "EndpointID": "7d516a0f48abc5ae6c58eb478fadea2b12dd061df0deaad9b6c9f3aa20f61609",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:90:e2:38:98:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-962100",
	                        "69fc512320a4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
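Note the contrast inside the inspect output: HostConfig.PortBindings requests ephemeral ports (empty HostPort), while NetworkSettings.Ports holds the resolved bindings. That is why the harness reads the SSH port with the inspect template shown in the logs above, which for this run yields the 32768 that sshutil connects to:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-962100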
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-962100 -n addons-962100
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-962100 logs -n 25: (1.13522491s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-438068 --alsologtostderr --binary-mirror http://127.0.0.1:38621 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-438068 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-438068                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-438068 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ addons  │ disable dashboard -p addons-962100                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-962100                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ start   │ -p addons-962100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:30 UTC │
	│ addons  │ addons-962100 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-962100 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-962100 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-962100                                                                                                                                                                                                                                                                                                                                                                                           │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │ 24 Nov 25 08:31 UTC │
	│ addons  │ addons-962100 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ ip      │ addons-962100 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │ 24 Nov 25 08:31 UTC │
	│ addons  │ addons-962100 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ ssh     │ addons-962100 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ ssh     │ addons-962100 ssh cat /opt/local-path-provisioner/pvc-ce2de511-5f70-4830-81a6-055c004c75bd_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │ 24 Nov 25 08:31 UTC │
	│ addons  │ addons-962100 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-962100 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-962100 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-962100 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-962100        │ jenkins │ v1.37.0 │ 24 Nov 25 08:33 UTC │ 24 Nov 25 08:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:49.204741   11068 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:49.204852   11068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:49.204861   11068 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:49.204865   11068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:49.205067   11068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:28:49.205569   11068 out.go:368] Setting JSON to false
	I1124 08:28:49.206352   11068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":675,"bootTime":1763972254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:49.206408   11068 start.go:143] virtualization: kvm guest
	I1124 08:28:49.208155   11068 out.go:179] * [addons-962100] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:28:49.209307   11068 notify.go:221] Checking for updates...
	I1124 08:28:49.209355   11068 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:28:49.210535   11068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:49.211732   11068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:28:49.213040   11068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:28:49.214183   11068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:28:49.215280   11068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:28:49.216584   11068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:49.239578   11068 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:28:49.239680   11068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:49.294796   11068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 08:28:49.285552794 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:49.294935   11068 docker.go:319] overlay module found
	I1124 08:28:49.296704   11068 out.go:179] * Using the docker driver based on user configuration
	I1124 08:28:49.297781   11068 start.go:309] selected driver: docker
	I1124 08:28:49.297794   11068 start.go:927] validating driver "docker" against <nil>
	I1124 08:28:49.297806   11068 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:28:49.298497   11068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:49.350596   11068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 08:28:49.340950974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:49.350764   11068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:49.350945   11068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:28:49.352421   11068 out.go:179] * Using Docker driver with root privileges
	I1124 08:28:49.353624   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:28:49.353684   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:28:49.353694   11068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 08:28:49.353742   11068 start.go:353] cluster config:
	{Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:28:49.355071   11068 out.go:179] * Starting "addons-962100" primary control-plane node in "addons-962100" cluster
	I1124 08:28:49.356085   11068 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 08:28:49.357236   11068 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 08:28:49.358394   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:28:49.358432   11068 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 08:28:49.358440   11068 cache.go:65] Caching tarball of preloaded images
	I1124 08:28:49.358481   11068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 08:28:49.358539   11068 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 08:28:49.358554   11068 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 08:28:49.358937   11068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json ...
	I1124 08:28:49.358962   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json: {Name:mkdc3c22d4d70a34b7b204e8d62eedb63621a714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:28:49.374881   11068 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:28:49.374990   11068 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 08:28:49.375012   11068 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 08:28:49.375021   11068 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 08:28:49.375030   11068 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 08:28:49.375037   11068 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 08:29:02.174472   11068 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 08:29:02.174506   11068 cache.go:243] Successfully downloaded all kic artifacts
	I1124 08:29:02.174553   11068 start.go:360] acquireMachinesLock for addons-962100: {Name:mk3e2d5d356e4c2edfb09ca9395f801263e4cc51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:02.174656   11068 start.go:364] duration metric: took 81.326µs to acquireMachinesLock for "addons-962100"
	I1124 08:29:02.174684   11068 start.go:93] Provisioning new machine with config: &{Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:29:02.174801   11068 start.go:125] createHost starting for "" (driver="docker")
	I1124 08:29:02.176966   11068 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 08:29:02.177187   11068 start.go:159] libmachine.API.Create for "addons-962100" (driver="docker")
	I1124 08:29:02.177216   11068 client.go:173] LocalClient.Create starting
	I1124 08:29:02.177315   11068 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 08:29:02.296893   11068 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 08:29:02.400633   11068 cli_runner.go:164] Run: docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 08:29:02.418089   11068 cli_runner.go:211] docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 08:29:02.418152   11068 network_create.go:284] running [docker network inspect addons-962100] to gather additional debugging logs...
	I1124 08:29:02.418170   11068 cli_runner.go:164] Run: docker network inspect addons-962100
	W1124 08:29:02.433948   11068 cli_runner.go:211] docker network inspect addons-962100 returned with exit code 1
	I1124 08:29:02.433975   11068 network_create.go:287] error running [docker network inspect addons-962100]: docker network inspect addons-962100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-962100 not found
	I1124 08:29:02.433987   11068 network_create.go:289] output of [docker network inspect addons-962100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-962100 not found
	
	** /stderr **
	I1124 08:29:02.434079   11068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 08:29:02.450793   11068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24860}
	I1124 08:29:02.450825   11068 network_create.go:124] attempt to create docker network addons-962100 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 08:29:02.450864   11068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-962100 addons-962100
	I1124 08:29:02.496424   11068 network_create.go:108] docker network addons-962100 192.168.49.0/24 created
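The block above is minikube's network_create flow: inspect for a network named after the profile, fall back to probing for a free private /24, then create a labeled bridge network. A minimal sketch of the equivalent manual sequence, using the same name, subnet, and flags the log settled on:

	# exit status 1 from inspect means "no such network", so create it
	docker network inspect addons-962100 >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=addons-962100 \
	    addons-962100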
	I1124 08:29:02.496453   11068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-962100" container
	I1124 08:29:02.496511   11068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 08:29:02.513514   11068 cli_runner.go:164] Run: docker volume create addons-962100 --label name.minikube.sigs.k8s.io=addons-962100 --label created_by.minikube.sigs.k8s.io=true
	I1124 08:29:02.531397   11068 oci.go:103] Successfully created a docker volume addons-962100
	I1124 08:29:02.531477   11068 cli_runner.go:164] Run: docker run --rm --name addons-962100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --entrypoint /usr/bin/test -v addons-962100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 08:29:09.170802   11068 cli_runner.go:217] Completed: docker run --rm --name addons-962100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --entrypoint /usr/bin/test -v addons-962100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.639289207s)
	I1124 08:29:09.170828   11068 oci.go:107] Successfully prepared a docker volume addons-962100
	I1124 08:29:09.170883   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:09.170894   11068 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 08:29:09.170936   11068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-962100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 08:29:12.996139   11068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-962100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.825153774s)
	I1124 08:29:12.996167   11068 kic.go:203] duration metric: took 3.825268872s to extract preloaded images to volume ...
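The preload step avoids pulling every image on first boot: a throwaway container whose entrypoint is tar unpacks the lz4 tarball straight into the named volume that will later back the node's /var. Reduced to its shape (KIC stands for the full kicbase image reference logged above; the tarball path is shortened from the jenkins cache path):

	KIC=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v addons-962100:/extractDir \
	  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir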
	W1124 08:29:12.996245   11068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 08:29:12.996277   11068 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 08:29:12.996317   11068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 08:29:13.055427   11068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-962100 --name addons-962100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-962100 --network addons-962100 --ip 192.168.49.2 --volume addons-962100:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 08:29:13.364108   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Running}}
	I1124 08:29:13.383311   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.402899   11068 cli_runner.go:164] Run: docker exec addons-962100 stat /var/lib/dpkg/alternatives/iptables
	I1124 08:29:13.448446   11068 oci.go:144] the created container "addons-962100" has a running status.
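The node itself is a privileged container pinned to the static IP computed from the subnet, with /var backed by the preloaded volume and every service port published only on loopback with an ephemeral host port. Trimmed to the flags that matter (values as logged; the extra port publishes beyond 8443 omitted):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname addons-962100 --name addons-962100 \
	  --network addons-962100 --ip 192.168.49.2 \
	  --volume addons-962100:/var --memory=4096mb \
	  --publish=127.0.0.1::8443 "$KIC"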
	I1124 08:29:13.448471   11068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa...
	I1124 08:29:13.500824   11068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 08:29:13.529394   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.547314   11068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 08:29:13.547345   11068 kic_runner.go:114] Args: [docker exec --privileged addons-962100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 08:29:13.585309   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.605526   11068 machine.go:94] provisionDockerMachine start ...
	I1124 08:29:13.605624   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:13.623206   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:13.623453   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:13.623467   11068 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 08:29:13.624700   11068 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33246->127.0.0.1:32768: read: connection reset by peer
	I1124 08:29:16.767386   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-962100
	
	I1124 08:29:16.767417   11068 ubuntu.go:182] provisioning hostname "addons-962100"
	I1124 08:29:16.767487   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:16.785008   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:16.785213   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:16.785226   11068 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-962100 && echo "addons-962100" | sudo tee /etc/hostname
	I1124 08:29:16.934564   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-962100
	
	I1124 08:29:16.934625   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:16.952129   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:16.952325   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:16.952368   11068 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-962100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-962100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-962100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 08:29:17.092482   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 08:29:17.092515   11068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 08:29:17.092554   11068 ubuntu.go:190] setting up certificates
	I1124 08:29:17.092574   11068 provision.go:84] configureAuth start
	I1124 08:29:17.092644   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.110404   11068 provision.go:143] copyHostCerts
	I1124 08:29:17.110461   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 08:29:17.110594   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 08:29:17.110655   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 08:29:17.110709   11068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.addons-962100 san=[127.0.0.1 192.168.49.2 addons-962100 localhost minikube]
	I1124 08:29:17.174216   11068 provision.go:177] copyRemoteCerts
	I1124 08:29:17.174266   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 08:29:17.174297   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.191029   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.290055   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 08:29:17.308041   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 08:29:17.324552   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 08:29:17.340497   11068 provision.go:87] duration metric: took 247.906309ms to configureAuth
	I1124 08:29:17.340520   11068 ubuntu.go:206] setting minikube options for container-runtime
	I1124 08:29:17.340680   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:17.340767   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.356971   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:17.357257   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:17.357280   11068 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 08:29:17.632482   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 08:29:17.632506   11068 machine.go:97] duration metric: took 4.026955886s to provisionDockerMachine
	I1124 08:29:17.632518   11068 client.go:176] duration metric: took 15.455291206s to LocalClient.Create
	I1124 08:29:17.632539   11068 start.go:167] duration metric: took 15.455351433s to libmachine.API.Create "addons-962100"
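The container-runtime step inside provisionDockerMachine writes an environment file read by the crio systemd unit, which is how the cluster's service CIDR lands on CRI-O's insecure-registry list; the restart makes it take effect. On the node this reduces to:

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio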
	I1124 08:29:17.632552   11068 start.go:293] postStartSetup for "addons-962100" (driver="docker")
	I1124 08:29:17.632563   11068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 08:29:17.632629   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 08:29:17.632673   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.650244   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.751671   11068 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 08:29:17.754897   11068 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 08:29:17.754924   11068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 08:29:17.754936   11068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 08:29:17.754994   11068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 08:29:17.755025   11068 start.go:296] duration metric: took 122.467001ms for postStartSetup
	I1124 08:29:17.755300   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.773051   11068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json ...
	I1124 08:29:17.773365   11068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:29:17.773422   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.789516   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.886156   11068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 08:29:17.890419   11068 start.go:128] duration metric: took 15.715606607s to createHost
	I1124 08:29:17.890444   11068 start.go:83] releasing machines lock for "addons-962100", held for 15.715774317s
	I1124 08:29:17.890504   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.907663   11068 ssh_runner.go:195] Run: cat /version.json
	I1124 08:29:17.907703   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.907759   11068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 08:29:17.907835   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.925568   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.925991   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:18.021269   11068 ssh_runner.go:195] Run: systemctl --version
	I1124 08:29:18.077074   11068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 08:29:18.110631   11068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 08:29:18.114932   11068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 08:29:18.114995   11068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 08:29:18.139814   11068 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 08:29:18.139839   11068 start.go:496] detecting cgroup driver to use...
	I1124 08:29:18.139866   11068 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 08:29:18.139902   11068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 08:29:18.154687   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 08:29:18.166169   11068 docker.go:218] disabling cri-docker service (if available) ...
	I1124 08:29:18.166211   11068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 08:29:18.180981   11068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 08:29:18.197152   11068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 08:29:18.276794   11068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 08:29:18.360249   11068 docker.go:234] disabling docker service ...
	I1124 08:29:18.360302   11068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 08:29:18.376972   11068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 08:29:18.388541   11068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 08:29:18.468512   11068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 08:29:18.548485   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 08:29:18.560371   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 08:29:18.573732   11068 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.34.2/kubeadm
	I1124 08:29:19.366089   11068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 08:29:19.366150   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.376440   11068 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 08:29:19.376494   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.384571   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.392539   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.400595   11068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 08:29:19.407978   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.415861   11068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.428137   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
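The run of sed commands above converges /etc/crio/crio.conf.d/02-crio.conf on a known state: the pause image, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged low ports. A quick way to confirm the result (expected matches shown as comments; the exact section placement inside the file is not visible in this log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",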
	I1124 08:29:19.436236   11068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 08:29:19.442760   11068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 08:29:19.442798   11068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 08:29:19.454128   11068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
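The failed sysctl probe just above is expected on a fresh node: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, which is exactly what the modprobe fixes before IP forwarding is switched on. To reproduce the check:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward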
	I1124 08:29:19.461695   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:19.536693   11068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 08:29:19.803421   11068 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 08:29:19.803496   11068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 08:29:19.807203   11068 start.go:564] Will wait 60s for crictl version
	I1124 08:29:19.807247   11068 ssh_runner.go:195] Run: which crictl
	I1124 08:29:19.810500   11068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 08:29:19.833967   11068 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 08:29:19.834066   11068 ssh_runner.go:195] Run: crio --version
	I1124 08:29:19.859618   11068 ssh_runner.go:195] Run: crio --version
	I1124 08:29:19.886959   11068 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 08:29:19.888122   11068 cli_runner.go:164] Run: docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 08:29:19.904835   11068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 08:29:19.908631   11068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 08:29:19.918141   11068 kubeadm.go:884] updating cluster {Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
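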
	I1124 08:29:19.918315   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.071372   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.218390   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.360631   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:20.360790   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.505079   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.674248   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.815718   11068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:20.845250   11068 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 08:29:20.845271   11068 crio.go:433] Images already preloaded, skipping extraction
	I1124 08:29:20.845310   11068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:20.868135   11068 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 08:29:20.868155   11068 cache_images.go:86] Images are preloaded, skipping loading
	I1124 08:29:20.868163   11068 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1124 08:29:20.868240   11068 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-962100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
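The empty ExecStart= line in the rendered unit is deliberate systemd syntax: in a drop-in it clears the ExecStart inherited from the base kubelet.service so the minikube-specific command line replaces it instead of being appended. Once the drop-in is in place, the effective unit can be inspected with:

	sudo systemctl daemon-reload
	systemctl cat kubelet    # base unit plus drop-ins; the last ExecStart wins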
	I1124 08:29:20.868298   11068 ssh_runner.go:195] Run: crio config
	I1124 08:29:20.911083   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:29:20.911103   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:29:20.911118   11068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 08:29:20.911143   11068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-962100 NodeName:addons-962100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 08:29:20.911250   11068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-962100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 08:29:20.911310   11068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 08:29:20.919005   11068 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 08:29:20.919058   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 08:29:20.926370   11068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 08:29:20.938246   11068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 08:29:20.952585   11068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 08:29:20.964634   11068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 08:29:20.968178   11068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
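
The two Run: lines above first probe /etc/hosts for the control-plane entry and then rewrite the file with a grep-and-append one-liner, so the 192.168.49.2 -> control-plane.minikube.internal mapping appears exactly once. A minimal Go sketch of the same idempotent update follows; ensureHostsEntry is a hypothetical helper for illustration (minikube runs the shell equivalent remotely over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line
// mapping name to ip, mirroring the grep-and-append one-liner above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for this hostname (tab-separated, as in the log).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}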
	I1124 08:29:20.977806   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:21.055565   11068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 08:29:21.078627   11068 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100 for IP: 192.168.49.2
	I1124 08:29:21.078649   11068 certs.go:195] generating shared ca certs ...
	I1124 08:29:21.078667   11068 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.078784   11068 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 08:29:21.133399   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt ...
	I1124 08:29:21.133431   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt: {Name:mkec0fc3ca0f5dbe0072c3481bb90432f10f6787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.133630   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key ...
	I1124 08:29:21.133643   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key: {Name:mk97df4b1f29dbb411889911d17e112c712f049b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.133743   11068 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 08:29:21.315716   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt ...
	I1124 08:29:21.315744   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt: {Name:mk540fafc920f8ec7c9e11ac00269b1fb38df736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.315938   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key ...
	I1124 08:29:21.315956   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key: {Name:mk1eb8415a321de0ec27a8a4e25a4deffcde087f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.316052   11068 certs.go:257] generating profile certs ...
	I1124 08:29:21.316128   11068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key
	I1124 08:29:21.316145   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt with IP's: []
	I1124 08:29:21.421292   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt ...
	I1124 08:29:21.421324   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: {Name:mka27ab45669c8e04fef7a48ddac74e354a5583b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.421536   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key ...
	I1124 08:29:21.421554   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key: {Name:mk92a81950b37e61a5f798503f887e153123cd4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.421649   11068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1
	I1124 08:29:21.421673   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 08:29:21.551793   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 ...
	I1124 08:29:21.551829   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1: {Name:mk968d6aaade4ffffb613ba8d7ba168f2f3bffb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.551988   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1 ...
	I1124 08:29:21.552001   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1: {Name:mk85fb87eb5bc673a2157245b545efee57e2c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.552066   11068 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt
	I1124 08:29:21.552158   11068 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key
	I1124 08:29:21.552212   11068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key
	I1124 08:29:21.552229   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt with IP's: []
	I1124 08:29:21.580844   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt ...
	I1124 08:29:21.580881   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt: {Name:mk767ceea1701aee964ef40b5686c0c967807c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.581026   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key ...
	I1124 08:29:21.581037   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key: {Name:mk5b16a40fe0ef51d8e786498ae7b29cb6116001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.581192   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 08:29:21.581225   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 08:29:21.581253   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 08:29:21.581277   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 08:29:21.581835   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 08:29:21.599009   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 08:29:21.615285   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 08:29:21.631735   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 08:29:21.647931   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 08:29:21.664190   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 08:29:21.680344   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 08:29:21.696682   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 08:29:21.712854   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 08:29:21.731228   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 08:29:21.742926   11068 ssh_runner.go:195] Run: openssl version
	I1124 08:29:21.748819   11068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 08:29:21.759435   11068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.763072   11068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.763117   11068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.796840   11068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
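
The openssl x509 -hash step above computes the subject-name hash that OpenSSL uses to locate CA files, and the following symlink (b5213941.0) installs minikubeCA.pem under that hash in /etc/ssl/certs. A sketch of the same two steps from Go, assuming the openssl CLI is on PATH; this mirrors the behavior in the log rather than reproducing minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert symlinks certPath into certsDir under its OpenSSL subject
// hash with a ".0" suffix, which is how OpenSSL looks up CA files.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}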
	I1124 08:29:21.805439   11068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 08:29:21.808934   11068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 08:29:21.808987   11068 kubeadm.go:401] StartCluster: {Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:21.809062   11068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:29:21.809110   11068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:29:21.835442   11068 cri.go:89] found id: ""
	I1124 08:29:21.835495   11068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 08:29:21.843113   11068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 08:29:21.850375   11068 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 08:29:21.850433   11068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 08:29:21.857622   11068 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 08:29:21.857640   11068 kubeadm.go:158] found existing configuration files:
	
	I1124 08:29:21.857671   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 08:29:21.864842   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 08:29:21.864896   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 08:29:21.871713   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 08:29:21.878766   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 08:29:21.878816   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 08:29:21.885568   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 08:29:21.892544   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 08:29:21.892597   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 08:29:21.899265   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 08:29:21.906198   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 08:29:21.906244   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 08:29:21.913130   11068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 08:29:21.980613   11068 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 08:29:22.041829   11068 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 08:29:31.168015   11068 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 08:29:31.168063   11068 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 08:29:31.168143   11068 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 08:29:31.168234   11068 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 08:29:31.168303   11068 kubeadm.go:319] OS: Linux
	I1124 08:29:31.168380   11068 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 08:29:31.168457   11068 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 08:29:31.168521   11068 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 08:29:31.168584   11068 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 08:29:31.168647   11068 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 08:29:31.168720   11068 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 08:29:31.168784   11068 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 08:29:31.168839   11068 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 08:29:31.168943   11068 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 08:29:31.169076   11068 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 08:29:31.169205   11068 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 08:29:31.169295   11068 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 08:29:31.170708   11068 out.go:252]   - Generating certificates and keys ...
	I1124 08:29:31.170791   11068 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 08:29:31.170868   11068 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 08:29:31.170954   11068 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 08:29:31.171012   11068 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 08:29:31.171088   11068 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 08:29:31.171139   11068 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 08:29:31.171185   11068 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 08:29:31.171317   11068 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-962100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 08:29:31.171467   11068 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 08:29:31.171616   11068 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-962100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 08:29:31.171700   11068 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 08:29:31.171774   11068 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 08:29:31.171845   11068 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 08:29:31.171930   11068 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 08:29:31.172007   11068 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 08:29:31.172064   11068 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 08:29:31.172111   11068 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 08:29:31.172174   11068 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 08:29:31.172235   11068 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 08:29:31.172326   11068 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 08:29:31.172432   11068 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 08:29:31.173625   11068 out.go:252]   - Booting up control plane ...
	I1124 08:29:31.173699   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 08:29:31.173767   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 08:29:31.173835   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 08:29:31.173946   11068 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 08:29:31.174051   11068 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 08:29:31.174191   11068 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 08:29:31.174304   11068 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 08:29:31.174371   11068 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 08:29:31.174497   11068 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 08:29:31.174608   11068 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 08:29:31.174697   11068 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.32868ms
	I1124 08:29:31.174811   11068 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 08:29:31.174920   11068 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 08:29:31.175056   11068 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 08:29:31.175171   11068 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 08:29:31.175252   11068 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.449484s
	I1124 08:29:31.175325   11068 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.192961736s
	I1124 08:29:31.175445   11068 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001276406s
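
The [control-plane-check] phase above polls each component's local health endpoint until it answers: kube-apiserver on /livez, kube-controller-manager on /healthz, kube-scheduler on /livez, at the URLs shown in the log. A minimal Go sketch of that style of probe loop follows; TLS verification is skipped here on the assumption that the components serve self-signed certificates on localhost, and the endpoints and timeout are taken from the log lines:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the deadline passes,
// in the spirit of kubeadm's [control-plane-check] phase above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Local control-plane components serve self-signed certs, so this
		// sketch skips verification; a real client would pin the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}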
	I1124 08:29:31.175543   11068 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 08:29:31.175653   11068 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 08:29:31.175705   11068 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 08:29:31.175862   11068 kubeadm.go:319] [mark-control-plane] Marking the node addons-962100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 08:29:31.175916   11068 kubeadm.go:319] [bootstrap-token] Using token: cxd9y9.cgdhsbm31ng53iju
	I1124 08:29:31.177075   11068 out.go:252]   - Configuring RBAC rules ...
	I1124 08:29:31.177163   11068 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 08:29:31.177234   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 08:29:31.177391   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 08:29:31.177584   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 08:29:31.177690   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 08:29:31.177798   11068 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 08:29:31.177975   11068 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 08:29:31.178025   11068 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 08:29:31.178098   11068 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 08:29:31.178112   11068 kubeadm.go:319] 
	I1124 08:29:31.178172   11068 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 08:29:31.178181   11068 kubeadm.go:319] 
	I1124 08:29:31.178269   11068 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 08:29:31.178284   11068 kubeadm.go:319] 
	I1124 08:29:31.178327   11068 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 08:29:31.178428   11068 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 08:29:31.178502   11068 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 08:29:31.178513   11068 kubeadm.go:319] 
	I1124 08:29:31.178578   11068 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 08:29:31.178584   11068 kubeadm.go:319] 
	I1124 08:29:31.178623   11068 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 08:29:31.178629   11068 kubeadm.go:319] 
	I1124 08:29:31.178672   11068 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 08:29:31.178741   11068 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 08:29:31.178801   11068 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 08:29:31.178806   11068 kubeadm.go:319] 
	I1124 08:29:31.178875   11068 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 08:29:31.178940   11068 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 08:29:31.178945   11068 kubeadm.go:319] 
	I1124 08:29:31.179040   11068 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cxd9y9.cgdhsbm31ng53iju \
	I1124 08:29:31.179147   11068 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 08:29:31.179167   11068 kubeadm.go:319] 	--control-plane 
	I1124 08:29:31.179174   11068 kubeadm.go:319] 
	I1124 08:29:31.179249   11068 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 08:29:31.179255   11068 kubeadm.go:319] 
	I1124 08:29:31.179373   11068 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cxd9y9.cgdhsbm31ng53iju \
	I1124 08:29:31.179512   11068 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
	I1124 08:29:31.179528   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:29:31.179538   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:29:31.180916   11068 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 08:29:31.182097   11068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 08:29:31.186553   11068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 08:29:31.186569   11068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 08:29:31.199056   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 08:29:31.389470   11068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 08:29:31.389537   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-962100 minikube.k8s.io/updated_at=2025_11_24T08_29_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=addons-962100 minikube.k8s.io/primary=true
	I1124 08:29:31.389607   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:31.466818   11068 ops.go:34] apiserver oom_adj: -16
	I1124 08:29:31.466935   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:31.967874   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:32.467036   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:32.967571   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:33.467079   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:33.967316   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:34.467111   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:34.967642   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:35.467723   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:35.967683   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:36.030290   11068 kubeadm.go:1114] duration metric: took 4.640710394s to wait for elevateKubeSystemPrivileges
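
The burst of `kubectl get sa default` runs above is a roughly 500ms poll: minikube waits for the cluster to create the default service account before it can grant kube-system privileges, and the duration metric records the 4.64s that took. A sketch of such a poll loop, shelling out to kubectl; waitDefaultSA is a hypothetical helper and the paths are those from the log, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` until it succeeds,
// matching the ~500ms polling cadence visible in the log above.
func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitDefaultSA("/var/lib/minikube/binaries/v1.34.2/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}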
	I1124 08:29:36.030326   11068 kubeadm.go:403] duration metric: took 14.221343677s to StartCluster
	I1124 08:29:36.030373   11068 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:36.030496   11068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:29:36.030962   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:36.031170   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 08:29:36.031201   11068 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:29:36.031263   11068 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 08:29:36.031396   11068 addons.go:70] Setting yakd=true in profile "addons-962100"
	I1124 08:29:36.031413   11068 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-962100"
	I1124 08:29:36.031420   11068 addons.go:239] Setting addon yakd=true in "addons-962100"
	I1124 08:29:36.031429   11068 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-962100"
	I1124 08:29:36.031445   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:36.031455   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031460   11068 addons.go:70] Setting registry-creds=true in profile "addons-962100"
	I1124 08:29:36.031476   11068 addons.go:70] Setting cloud-spanner=true in profile "addons-962100"
	I1124 08:29:36.031483   11068 addons.go:239] Setting addon registry-creds=true in "addons-962100"
	I1124 08:29:36.031488   11068 addons.go:239] Setting addon cloud-spanner=true in "addons-962100"
	I1124 08:29:36.031481   11068 addons.go:70] Setting metrics-server=true in profile "addons-962100"
	I1124 08:29:36.031504   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031503   11068 addons.go:70] Setting storage-provisioner=true in profile "addons-962100"
	I1124 08:29:36.031507   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031517   11068 addons.go:239] Setting addon storage-provisioner=true in "addons-962100"
	I1124 08:29:36.031522   11068 addons.go:239] Setting addon metrics-server=true in "addons-962100"
	I1124 08:29:36.031539   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031554   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031555   11068 addons.go:70] Setting default-storageclass=true in profile "addons-962100"
	I1124 08:29:36.031580   11068 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-962100"
	I1124 08:29:36.031893   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031967   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031978   11068 addons.go:70] Setting volcano=true in profile "addons-962100"
	I1124 08:29:36.031988   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031997   11068 addons.go:70] Setting volumesnapshots=true in profile "addons-962100"
	I1124 08:29:36.032008   11068 addons.go:70] Setting registry=true in profile "addons-962100"
	I1124 08:29:36.032013   11068 addons.go:70] Setting inspektor-gadget=true in profile "addons-962100"
	I1124 08:29:36.032026   11068 addons.go:239] Setting addon registry=true in "addons-962100"
	I1124 08:29:36.032029   11068 addons.go:239] Setting addon inspektor-gadget=true in "addons-962100"
	I1124 08:29:36.032046   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032057   11068 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-962100"
	I1124 08:29:36.032068   11068 addons.go:70] Setting gcp-auth=true in profile "addons-962100"
	I1124 08:29:36.032087   11068 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-962100"
	I1124 08:29:36.032091   11068 mustload.go:66] Loading cluster: addons-962100
	I1124 08:29:36.032103   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032148   11068 addons.go:70] Setting ingress=true in profile "addons-962100"
	I1124 08:29:36.032162   11068 addons.go:239] Setting addon ingress=true in "addons-962100"
	I1124 08:29:36.032189   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032250   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:36.032481   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032488   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032520   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032639   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031967   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031467   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.034528   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032047   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031988   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031998   11068 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-962100"
	I1124 08:29:36.035177   11068 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-962100"
	I1124 08:29:36.031494   11068 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-962100"
	I1124 08:29:36.035207   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.035218   11068 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-962100"
	I1124 08:29:36.035325   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.035489   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.035640   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.036434   11068 out.go:179] * Verifying Kubernetes components...
	I1124 08:29:36.031401   11068 addons.go:70] Setting ingress-dns=true in profile "addons-962100"
	I1124 08:29:36.036551   11068 addons.go:239] Setting addon ingress-dns=true in "addons-962100"
	I1124 08:29:36.036585   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.037071   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031991   11068 addons.go:239] Setting addon volcano=true in "addons-962100"
	I1124 08:29:36.037495   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032002   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032018   11068 addons.go:239] Setting addon volumesnapshots=true in "addons-962100"
	I1124 08:29:36.038270   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.038281   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:36.045985   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.045985   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.093253   11068 addons.go:239] Setting addon default-storageclass=true in "addons-962100"
	I1124 08:29:36.093384   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.093615   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.094027   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.098553   11068 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 08:29:36.100495   11068 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 08:29:36.101884   11068 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 08:29:36.101913   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 08:29:36.101964   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.104644   11068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 08:29:36.105951   11068 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:29:36.106844   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 08:29:36.106918   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.112728   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 08:29:36.115137   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 08:29:36.116996   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 08:29:36.117562   11068 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 08:29:36.118409   11068 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 08:29:36.119511   11068 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 08:29:36.119675   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 08:29:36.119836   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.120442   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 08:29:36.121794   11068 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:29:36.121813   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 08:29:36.121869   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.122027   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 08:29:36.123240   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 08:29:36.123295   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 08:29:36.124404   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 08:29:36.124421   11068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 08:29:36.124471   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.124479   11068 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 08:29:36.125539   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 08:29:36.125758   11068 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:29:36.125998   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 08:29:36.126232   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.137193   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 08:29:36.140190   11068 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 08:29:36.140318   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 08:29:36.140350   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 08:29:36.140435   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.141524   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 08:29:36.141544   11068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 08:29:36.141600   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.156719   11068 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 08:29:36.158121   11068 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-962100"
	I1124 08:29:36.158158   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.159029   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 08:29:36.159047   11068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 08:29:36.159152   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.159806   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.165752   11068 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 08:29:36.167253   11068 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:29:36.167294   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 08:29:36.167399   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.169283   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.170626   11068 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 08:29:36.171652   11068 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:29:36.171674   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 08:29:36.171723   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	W1124 08:29:36.172274   11068 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 08:29:36.176934   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.178087   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.178572   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.194294   11068 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 08:29:36.196413   11068 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:29:36.196443   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 08:29:36.196503   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.197775   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 08:29:36.200920   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:36.202809   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:36.205436   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.209426   11068 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:29:36.209451   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 08:29:36.209528   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.211151   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.215042   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.221532   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 08:29:36.225528   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.228607   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.228668   11068 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 08:29:36.228681   11068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 08:29:36.228727   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.232133   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.246650   11068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 08:29:36.248446   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.255526   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.257790   11068 out.go:179]   - Using image docker.io/busybox:stable
	I1124 08:29:36.258945   11068 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 08:29:36.260229   11068 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:29:36.260289   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 08:29:36.260372   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.267412   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.267941   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	W1124 08:29:36.269950   11068 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 08:29:36.270448   11068 retry.go:31] will retry after 313.082357ms: ssh: handshake failed: EOF
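
The dial failure above is transient (many SSH clients are connecting while the node's sshd is still settling), so sshutil schedules a retry after a jittered delay rather than failing. A generic Go sketch of retry-with-jittered-backoff in that spirit; the attempt count and base delay are illustrative, not the values retry.go uses:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a growing, jittered
// delay between tries, similar to the retry.go behavior in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent dialers don't retry in lockstep.
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(3, 200*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed: EOF") // stand-in for the real dial
	})
	fmt.Println(err)
}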
	I1124 08:29:36.289469   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.362304   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:29:36.381628   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:29:36.385302   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 08:29:36.385344   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 08:29:36.401992   11068 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 08:29:36.402018   11068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 08:29:36.407723   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 08:29:36.407761   11068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 08:29:36.417738   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 08:29:36.417759   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 08:29:36.424059   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 08:29:36.424081   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 08:29:36.426609   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:29:36.430119   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:29:36.430825   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:29:36.437889   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 08:29:36.437917   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 08:29:36.445727   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:29:36.447712   11068 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:29:36.447730   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 08:29:36.455562   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 08:29:36.455584   11068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 08:29:36.459797   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:29:36.463703   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 08:29:36.463722   11068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 08:29:36.481731   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 08:29:36.481760   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 08:29:36.488935   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:29:36.493994   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 08:29:36.499740   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:29:36.503088   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 08:29:36.503111   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 08:29:36.510766   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 08:29:36.510790   11068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 08:29:36.512559   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:29:36.512584   11068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 08:29:36.546505   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 08:29:36.546540   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 08:29:36.549740   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 08:29:36.549761   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 08:29:36.570514   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:29:36.570541   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 08:29:36.578550   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:29:36.589199   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 08:29:36.589231   11068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 08:29:36.609720   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 08:29:36.609771   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 08:29:36.634550   11068 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 08:29:36.636013   11068 node_ready.go:35] waiting up to 6m0s for node "addons-962100" to be "Ready" ...
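
The `node_ready.go:35` line above starts a 6-minute wait for the node's `Ready` condition; the `node "addons-962100" has "Ready":"False"` warnings that recur roughly every two seconds through the rest of this log are iterations of that wait. A minimal client-go sketch of such a check, assuming a kubeconfig at the default location (function names here are hypothetical, not minikube's):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" above
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "addons-962100"); ok {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println(`node "addons-962100" has "Ready":"False" status (will retry)`)
		time.Sleep(2 * time.Second)
	}
}
```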
	I1124 08:29:36.636687   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:29:36.653601   11068 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:36.653628   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 08:29:36.671562   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 08:29:36.671586   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 08:29:36.704768   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:36.747245   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 08:29:36.747266   11068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 08:29:36.802886   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 08:29:36.802906   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 08:29:36.809792   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 08:29:36.830146   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 08:29:36.830169   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 08:29:36.893578   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:29:36.893605   11068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 08:29:36.929067   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:29:37.140528   11068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-962100" context rescaled to 1 replicas
	I1124 08:29:37.510637   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.080438247s)
	I1124 08:29:37.510730   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.079884788s)
	I1124 08:29:37.510782   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.065035763s)
	I1124 08:29:37.510861   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.051040652s)
	I1124 08:29:37.511154   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.022188905s)
	I1124 08:29:37.511220   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.01720527s)
	I1124 08:29:37.511283   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.011509055s)
	I1124 08:29:37.511307   11068 addons.go:495] Verifying addon registry=true in "addons-962100"
	I1124 08:29:37.511426   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.08479159s)
	I1124 08:29:37.511449   11068 addons.go:495] Verifying addon ingress=true in "addons-962100"
	I1124 08:29:37.511974   11068 addons.go:495] Verifying addon metrics-server=true in "addons-962100"
	I1124 08:29:37.516711   11068 out.go:179] * Verifying registry addon...
	I1124 08:29:37.516790   11068 out.go:179] * Verifying ingress addon...
	I1124 08:29:37.516826   11068 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-962100 service yakd-dashboard -n yakd-dashboard
	
	I1124 08:29:37.519116   11068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 08:29:37.519159   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 08:29:37.526809   11068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 08:29:37.526836   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:37.527042   11068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 08:29:37.527061   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
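
The `kapi.go:75/86/96` lines above (and the hundreds of `waiting for pod ... Pending` lines that follow, at roughly 500ms intervals) are a label-selector poll: list pods matching a selector in a namespace, report their phase, and loop until they run or the wait times out. A self-contained client-go sketch of that loop; `waitForPods` is a hypothetical name and the real minikube helper differs in detail:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// or the timeout elapses. Illustrative only.
func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

Each selector (`registry`, `ingress-nginx`, `csi-hostpath-driver`, later `gcp-auth`) gets its own instance of this loop, which is why the log interleaves three to four near-identical lines per tick.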
	I1124 08:29:38.021705   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:38.021916   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:38.024261   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319458307s)
	I1124 08:29:38.024309   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.214491375s)
	W1124 08:29:38.024315   11068 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 08:29:38.024360   11068 retry.go:31] will retry after 313.967072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
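
The failure above is a known `kubectl apply` ordering race: the VolumeSnapshot CRDs and a `VolumeSnapshotClass` object are applied in one batch, so the custom object fails REST mapping before the just-created CRDs are registered, and the batch is re-applied (with `--force`, as at 08:29:38.339 below) after a short delay. A minimal sketch of that detect-and-retry behavior, assuming `kubectl` on PATH; `applyManifests` and the file names are hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyManifests shells out to kubectl apply over the given files,
// optionally with --force. Illustrative helper, not minikube's code.
func applyManifests(force bool, files ...string) error {
	args := []string{"apply"}
	if force {
		args = append(args, "--force")
	}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w: %s", err, out)
	}
	return nil
}

func main() {
	files := []string{"csi-hostpath-snapshotclass.yaml", "snapshot-crds.yaml"} // hypothetical paths
	if err := applyManifests(false, files...); err != nil &&
		strings.Contains(err.Error(), "ensure CRDs are installed first") {
		time.Sleep(300 * time.Millisecond) // let the CRDs register, then retry
		_ = applyManifests(true, files...)
	}
}
```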
	I1124 08:29:38.024540   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095370317s)
	I1124 08:29:38.024566   11068 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-962100"
	I1124 08:29:38.026385   11068 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 08:29:38.028670   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 08:29:38.033057   11068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 08:29:38.033084   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:38.339032   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:38.522729   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:38.522869   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:38.530969   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:38.638429   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:39.021887   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:39.022035   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:39.030715   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:39.522352   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:39.522480   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:39.531271   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:40.022073   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:40.022251   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:40.031099   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:40.522425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:40.522561   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:40.531176   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:40.639061   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:40.806583   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467510719s)
	I1124 08:29:41.022857   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:41.022987   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:41.030883   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:41.522574   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:41.522804   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:41.531442   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:42.022240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:42.022459   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:42.031047   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:42.522089   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:42.522314   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:42.531083   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:43.021828   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:43.021951   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:43.030615   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:43.139178   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:43.522423   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:43.522572   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:43.531407   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:43.699862   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 08:29:43.699924   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:43.716899   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:43.821133   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 08:29:43.832643   11068 addons.go:239] Setting addon gcp-auth=true in "addons-962100"
	I1124 08:29:43.832690   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:43.833012   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:43.850328   11068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 08:29:43.850392   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:43.867451   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:43.965832   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:43.967079   11068 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 08:29:43.968049   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 08:29:43.968069   11068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 08:29:43.981280   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 08:29:43.981300   11068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 08:29:43.993466   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:29:43.993502   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 08:29:44.005368   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:29:44.022198   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:44.022396   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:44.031596   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:44.295528   11068 addons.go:495] Verifying addon gcp-auth=true in "addons-962100"
	I1124 08:29:44.296881   11068 out.go:179] * Verifying gcp-auth addon...
	I1124 08:29:44.298908   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 08:29:44.301096   11068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 08:29:44.301113   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:44.521872   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:44.522002   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:44.530739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:44.801453   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:45.022158   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:45.022316   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:45.031217   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:45.302309   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:45.522075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:45.522153   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:45.530856   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:45.639237   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:45.801566   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:46.022274   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:46.022356   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:46.031030   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:46.301918   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:46.522316   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:46.522493   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:46.531374   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:46.802166   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:47.021789   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:47.021838   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:47.030459   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:47.302012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:47.522862   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:47.522942   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:47.530631   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:47.801277   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:48.021790   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:48.021888   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:48.030913   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:48.139496   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:48.302012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:48.522457   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:48.522649   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:48.531285   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:48.802069   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:49.021424   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:49.021466   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:49.031304   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:49.301270   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:49.521908   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:49.521963   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:49.530619   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:49.801356   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:50.021955   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:50.022150   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:50.031185   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:50.302035   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:50.522580   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:50.522723   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:50.531570   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:50.638979   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:50.801429   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:51.022444   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:51.022581   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:51.031413   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:51.302199   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:51.522129   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:51.522162   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:51.530999   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:51.802155   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:52.021753   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:52.021804   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:52.030720   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:52.301361   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:52.522170   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:52.522393   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:52.530776   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:52.640703   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:52.802013   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:53.022451   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:53.022635   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:53.031384   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:53.301882   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:53.522744   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:53.522793   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:53.531546   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:53.802404   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:54.022030   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:54.022136   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:54.030842   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:54.301829   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:54.522470   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:54.522588   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:54.531135   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:54.801886   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:55.022587   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:55.022624   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:55.031183   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:55.138618   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:55.301901   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:55.522869   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:55.522922   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:55.530778   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:55.801558   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:56.022171   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:56.022202   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:56.030964   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:56.301936   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:56.522372   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:56.522553   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:56.531162   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:56.801949   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:57.022276   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:57.022474   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:57.031241   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:57.302173   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:57.521768   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:57.521923   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:57.530800   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:57.639227   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:57.801568   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:58.021926   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:58.022149   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:58.030675   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:58.301595   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:58.522211   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:58.522441   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:58.531116   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:58.801754   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:59.022233   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:59.022414   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:59.031184   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:59.301919   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:59.523161   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:59.523255   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:59.531109   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:59.801780   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:00.022482   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:00.022599   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:00.031682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:00.138872   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:00.301368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:00.522050   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:00.522139   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:00.530997   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:00.801840   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:01.022573   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:01.022591   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:01.031607   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:01.301913   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:01.522875   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:01.522928   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:01.530808   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:01.801800   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:02.022383   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:02.022506   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:02.031495   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:02.139089   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:02.301749   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:02.522298   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:02.522433   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:02.531657   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:02.808710   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:03.022531   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:03.022590   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:03.032016   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:03.302211   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:03.521770   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:03.521878   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:03.530786   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:03.801906   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:04.022413   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:04.022569   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:04.031439   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:04.301596   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:04.522220   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:04.522452   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:04.531586   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:04.639000   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:04.801425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:05.022060   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:05.022195   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:05.031272   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:05.302171   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:05.521634   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:05.521688   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:05.531819   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:05.801770   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:06.022454   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:06.022675   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:06.031811   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:06.301853   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:06.522589   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:06.522593   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:06.531580   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:06.639175   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:06.801526   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:07.022387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:07.022375   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:07.031524   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:07.301420   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:07.522057   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:07.522216   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:07.530940   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:07.802046   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:08.021644   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:08.021900   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:08.030623   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:08.301595   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:08.522117   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:08.522197   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:08.531269   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:08.801505   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:09.022128   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:09.022368   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:09.031368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:09.138883   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:09.301267   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:09.522282   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:09.522374   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:09.531120   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:09.802146   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:10.021612   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:10.021709   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:10.031904   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:10.302055   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:10.521752   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:10.521933   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:10.531041   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:10.801934   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:11.022448   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:11.022585   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:11.031701   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:11.139175   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:11.301651   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:11.522760   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:11.522763   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:11.531817   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:11.801795   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:12.022568   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:12.022641   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:12.031559   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:12.301607   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:12.522212   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:12.522384   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:12.531318   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:12.802453   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:13.022616   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:13.022791   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:13.030963   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:13.139381   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:13.301888   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:13.522718   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:13.522892   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:13.530946   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:13.802270   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:14.022114   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:14.022125   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:14.031181   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:14.301318   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:14.522015   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:14.522097   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:14.530810   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:14.801868   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:15.022884   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:15.023064   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:15.031201   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:15.302296   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:15.521874   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:15.522027   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:15.530842   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:15.639246   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:15.801682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:16.022246   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:16.022379   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:16.031039   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:16.302014   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:16.522712   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:16.522868   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:16.530795   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:16.801590   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:17.022295   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:17.022442   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:17.031128   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:17.301861   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:17.522247   11068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 08:30:17.522267   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:17.522271   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:17.531899   11068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 08:30:17.531924   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
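
For context: the kapi.go lines above poll addon pods by Kubernetes label selector (e.g. kubernetes.io/minikube-addons=registry) and re-check until every matched pod has left the Pending phase. A minimal client-go sketch of that pattern, assuming an already-configured clientset; the function name, namespace argument, and 500ms interval are illustrative, not minikube's actual code:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForSelector lists pods matching selector and re-polls until at
    // least one pod matched and none of the matches is still Pending.
    func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending++
                }
            }
            if len(pods.Items) > 0 && pending == 0 {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // the log above polls each selector at a similar cadence
            }
        }
    }
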
	I1124 08:30:17.639244   11068 node_ready.go:49] node "addons-962100" is "Ready"
	I1124 08:30:17.639281   11068 node_ready.go:38] duration metric: took 41.003238351s for node "addons-962100" to be "Ready" ...
	I1124 08:30:17.639296   11068 api_server.go:52] waiting for apiserver process to appear ...
	I1124 08:30:17.639363   11068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:30:17.658671   11068 api_server.go:72] duration metric: took 41.627435209s to wait for apiserver process to appear ...
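
The pgrep invocation above is how the process check works: -x requires the pattern to match the command line exactly, -f matches against the full argument list rather than just the process name, and -n picks the newest match; exit status 0 means a kube-apiserver process exists. A local sketch of the same probe (minikube itself runs it through ssh_runner inside the node):

    package sketch

    import "os/exec"

    // apiserverRunning reports whether a kube-apiserver process matching the
    // minikube command line exists; pgrep exits 0 iff something matched.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }
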
	I1124 08:30:17.658700   11068 api_server.go:88] waiting for apiserver healthz status ...
	I1124 08:30:17.658724   11068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 08:30:17.665065   11068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 08:30:17.666098   11068 api_server.go:141] control plane version: v1.34.2
	I1124 08:30:17.666125   11068 api_server.go:131] duration metric: took 7.416605ms to wait for apiserver health ...
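
The healthz wait above is a plain HTTPS GET against the apiserver, with status 200 and the literal body "ok" treated as healthy. A runnable sketch against the same endpoint; note that minikube's real client authenticates with the cluster's CA certificate, while this illustrative version skips TLS verification for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // assumption for the sketch: skip cert checks instead of loading the cluster CA
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok, as in the log
    }
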
	I1124 08:30:17.666136   11068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 08:30:17.671852   11068 system_pods.go:59] 20 kube-system pods found
	I1124 08:30:17.671889   11068 system_pods.go:61] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:17.671901   11068 system_pods.go:61] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:17.671911   11068 system_pods.go:61] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:17.671924   11068 system_pods.go:61] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:17.671931   11068 system_pods.go:61] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:17.671939   11068 system_pods.go:61] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:17.671945   11068 system_pods.go:61] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:17.671955   11068 system_pods.go:61] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:17.671963   11068 system_pods.go:61] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:17.671974   11068 system_pods.go:61] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:17.671982   11068 system_pods.go:61] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:17.671988   11068 system_pods.go:61] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:17.671998   11068 system_pods.go:61] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:17.672007   11068 system_pods.go:61] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:17.672017   11068 system_pods.go:61] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:17.672023   11068 system_pods.go:61] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:17.672028   11068 system_pods.go:61] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:17.672040   11068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.672054   11068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.672065   11068 system_pods.go:61] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:17.672076   11068 system_pods.go:74] duration metric: took 5.933136ms to wait for pod list to return data ...
	I1124 08:30:17.672089   11068 default_sa.go:34] waiting for default service account to be created ...
	I1124 08:30:17.674762   11068 default_sa.go:45] found service account: "default"
	I1124 08:30:17.674781   11068 default_sa.go:55] duration metric: took 2.682517ms for default service account to be created ...
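
The default_sa.go wait above simply asks the API for the "default" ServiceAccount in the default namespace and retries until the Get succeeds. As a one-call client-go sketch (the helper name is illustrative; clientset setup is assumed):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists reports whether the "default" ServiceAccount has been
    // created in the default namespace; the caller retries until it has.
    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) bool {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        return err == nil
    }
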
	I1124 08:30:17.674791   11068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 08:30:17.772627   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:17.772658   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:17.772666   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:17.772672   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:17.772680   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:17.772686   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:17.772691   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:17.772696   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:17.772700   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:17.772703   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:17.772710   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:17.772713   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:17.772717   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:17.772722   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:17.772727   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:17.772739   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:17.772744   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:17.772752   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:17.772756   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.772766   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.772771   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:17.772786   11068 retry.go:31] will retry after 252.903354ms: missing components: kube-dns
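
The retry.go lines show the wait pattern for kube-system workloads: list the pods, report which required components are not yet Running (kube-dns here, served by the coredns pod), then sleep a short, slightly growing, jittered interval and try again: 252.9ms, then 274.7ms, then 299.2ms in this run. A minimal sketch of that loop; the growth factor and jitter below are illustrative, not minikube's exact backoff:

    package sketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs check until it returns nil or attempts run out,
    // sleeping a jittered, growing interval between tries.
    func retryUntil(check func() error, attempts int) error {
        wait := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            err := check()
            if err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            wait = time.Duration(float64(wait) * (1.05 + 0.1*rand.Float64())) // ~5-15% growth per try
        }
        return fmt.Errorf("gave up after %d attempts", attempts)
    }
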
	I1124 08:30:17.870758   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:18.022696   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:18.022933   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:18.029761   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.029792   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.029805   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:18.029818   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.029825   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.029835   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.029848   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.029856   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.029862   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.029868   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.029876   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.029881   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.029886   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.029894   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.029901   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.029918   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.029926   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.029948   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.029959   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.029971   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.029978   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:18.029993   11068 retry.go:31] will retry after 274.696351ms: missing components: kube-dns
	I1124 08:30:18.031425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:18.302742   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:18.309392   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.309434   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.309447   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:18.309459   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.309467   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.309477   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.309486   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.309493   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.309506   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.309512   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.309525   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.309534   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.309539   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.309550   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.309557   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.309568   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.309576   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.309583   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.309591   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.309599   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.309607   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:18.309623   11068 retry.go:31] will retry after 299.191807ms: missing components: kube-dns
	I1124 08:30:18.525016   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:18.525296   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:18.533406   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:18.628794   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.628833   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.628844   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Running
	I1124 08:30:18.628855   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.628863   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.628883   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.628894   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.628900   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.628906   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.628911   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.628919   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.628924   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.628930   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.628939   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.628950   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.628960   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.628968   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.628977   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.628985   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.628993   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.629005   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Running
	I1124 08:30:18.629016   11068 system_pods.go:126] duration metric: took 954.218181ms to wait for k8s-apps to be running ...
	I1124 08:30:18.629035   11068 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 08:30:18.629088   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:30:18.645214   11068 system_svc.go:56] duration metric: took 16.170582ms WaitForService to wait for kubelet
	I1124 08:30:18.645246   11068 kubeadm.go:587] duration metric: took 42.614016345s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:30:18.645267   11068 node_conditions.go:102] verifying NodePressure condition ...
	I1124 08:30:18.648194   11068 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 08:30:18.648226   11068 node_conditions.go:123] node cpu capacity is 8
	I1124 08:30:18.648245   11068 node_conditions.go:105] duration metric: took 2.971188ms to run NodePressure ...
	I1124 08:30:18.648260   11068 start.go:242] waiting for startup goroutines ...
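
The system_svc.go check above shells out to systemd: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active, which is why the log records just a duration. A local sketch of the same probe (minikube runs it over ssh_runner inside the node container):

    package sketch

    import "os/exec"

    // kubeletActive reports whether the kubelet systemd unit is active;
    // `is-active --quiet` exits non-zero for any other state.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
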
	I1124 08:30:18.802071   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:19.023187   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:19.023194   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:19.031033   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:19.302999   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:19.522943   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:19.523134   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:19.624259   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:19.801561   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:20.022480   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:20.022530   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:20.031743   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:20.302547   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:20.523014   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:20.523047   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:20.532195   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:20.802102   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:21.021812   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:21.021884   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:21.031208   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:21.302710   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:21.523537   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:21.523607   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:21.533375   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:21.804974   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:22.023287   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:22.023322   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:22.031803   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:22.302851   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:22.522954   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:22.523018   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:22.531786   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:22.802429   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:23.022450   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.022574   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:23.032180   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:23.302958   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:23.523387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.525384   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:23.533987   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:23.802605   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:24.023366   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.023582   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.033096   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:24.302739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:24.522910   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.523137   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.532003   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:24.824629   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:25.022857   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.022975   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.032546   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.302230   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:25.522153   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.522542   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.531829   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.804491   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.023253   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.023475   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.031996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.302006   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.523913   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.524198   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.532001   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.802881   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.023024   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.023238   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.031703   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.302712   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.523222   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.523366   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.533237   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.802588   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.023324   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.023476   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.032477   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.302194   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.523052   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.523058   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.531440   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.802605   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.022876   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.022978   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.031487   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.302682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.523292   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.523352   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.531866   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.802765   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.103171   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.103252   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.103478   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.301545   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.522776   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.522826   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.532139   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.802191   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.023516   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.023727   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.033062   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.301729   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.522943   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.523122   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.531855   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.803000   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.023612   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.023780   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.031984   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.301601   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.522307   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.522344   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.531495   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.802240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.022404   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.022438   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.031695   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.302327   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.521937   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.521992   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.531033   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.801855   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.022442   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.022513   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.031708   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.302170   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.522240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.522318   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.531851   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.802426   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.022541   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.022548   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.031570   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.303355   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.522768   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.522864   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.532012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.801866   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.023065   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.023280   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.031387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.302018   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.522900   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.523038   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.530839   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.801374   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.021931   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.022100   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.031408   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.302482   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.522996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.523047   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.532130   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.802116   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.023559   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.023711   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.032242   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.301741   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.522813   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.522848   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.531657   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.803180   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.022517   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.022520   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.031368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.302293   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.522882   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.522973   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.531121   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.802075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.022005   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.022139   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.031502   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.301996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.523522   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.523670   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.531837   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.803146   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.022050   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.022181   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.031457   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.301789   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.523051   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.523098   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.533931   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.802039   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.022017   11068 kapi.go:107] duration metric: took 1m4.502855375s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 08:30:42.022041   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.031075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.302430   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.522225   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.532445   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.802672   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.022788   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.032503   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.302378   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.521815   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.533739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.801534   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.023204   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.031881   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.301620   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.523181   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.532514   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.802217   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.022081   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.031450   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.302735   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.522833   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.531982   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.801634   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:46.022733   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:46.032314   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:46.302091   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:46.521910   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:46.531153   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:46.801695   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.023239   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.031986   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.301703   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.522502   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.531740   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.802524   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.026392   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.031610   11068 kapi.go:107] duration metric: took 1m10.002939716s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 08:30:48.302167   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.521863   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.801306   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.118788   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.453774   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.554906   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.801574   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.022918   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.301683   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.524064   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.801136   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.022423   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.302414   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.524137   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.803719   11068 kapi.go:107] duration metric: took 1m7.504805994s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 08:30:51.805149   11068 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-962100 cluster.
	I1124 08:30:51.806448   11068 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 08:30:51.807371   11068 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 08:30:52.023842   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:52.523393   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.022458   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.522608   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.023224   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.522730   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.022711   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.522641   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:56.022948   11068 kapi.go:107] duration metric: took 1m18.503830292s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 08:30:56.024372   11068 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, registry-creds, ingress-dns, nvidia-device-plugin, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1124 08:30:56.025546   11068 addons.go:530] duration metric: took 1m19.994291028s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin registry-creds ingress-dns nvidia-device-plugin inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1124 08:30:56.025583   11068 start.go:247] waiting for cluster config update ...
	I1124 08:30:56.025600   11068 start.go:256] writing updated cluster config ...
	I1124 08:30:56.025839   11068 ssh_runner.go:195] Run: rm -f paused
	I1124 08:30:56.029863   11068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:30:56.032611   11068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hvw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.036212   11068 pod_ready.go:94] pod "coredns-66bc5c9577-hvw7n" is "Ready"
	I1124 08:30:56.036230   11068 pod_ready.go:86] duration metric: took 3.599935ms for pod "coredns-66bc5c9577-hvw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.037947   11068 pod_ready.go:83] waiting for pod "etcd-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.041260   11068 pod_ready.go:94] pod "etcd-addons-962100" is "Ready"
	I1124 08:30:56.041275   11068 pod_ready.go:86] duration metric: took 3.313071ms for pod "etcd-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.042746   11068 pod_ready.go:83] waiting for pod "kube-apiserver-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.045925   11068 pod_ready.go:94] pod "kube-apiserver-addons-962100" is "Ready"
	I1124 08:30:56.045942   11068 pod_ready.go:86] duration metric: took 3.179384ms for pod "kube-apiserver-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.047526   11068 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.434170   11068 pod_ready.go:94] pod "kube-controller-manager-addons-962100" is "Ready"
	I1124 08:30:56.434195   11068 pod_ready.go:86] duration metric: took 386.652582ms for pod "kube-controller-manager-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.643897   11068 pod_ready.go:83] waiting for pod "kube-proxy-5hrvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.033541   11068 pod_ready.go:94] pod "kube-proxy-5hrvh" is "Ready"
	I1124 08:30:57.033565   11068 pod_ready.go:86] duration metric: took 389.639069ms for pod "kube-proxy-5hrvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.233479   11068 pod_ready.go:83] waiting for pod "kube-scheduler-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.633683   11068 pod_ready.go:94] pod "kube-scheduler-addons-962100" is "Ready"
	I1124 08:30:57.633708   11068 pod_ready.go:86] duration metric: took 400.206459ms for pod "kube-scheduler-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.633718   11068 pod_ready.go:40] duration metric: took 1.603834576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:30:57.678387   11068 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 08:30:57.680500   11068 out.go:179] * Done! kubectl is now configured to use "addons-962100" cluster and "default" namespace by default
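
The kapi.go:96 lines above are minikube's addon wait loop: each addon's pods are listed by label selector roughly every 500ms until they leave Pending, and kapi.go:107 then records the total wait. Below is a minimal client-go sketch of that polling pattern; waitForLabeledPods and the kubeconfig loading are illustrative, not minikube's actual helpers.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching selector is Running,
    // mirroring the "waiting for pod ... current state: Pending" loop above.
    func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        ready = false
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabeledPods(cs, "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }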
	
	
	==> CRI-O <==
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.712090347Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-775hv/POD" id=58151567-734e-4c3f-930d-7d38716c62cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.712175531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.719610139Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-775hv Namespace:default ID:c92a9a7d7f6822910996a2e3b4054fbd4740b918059b2e9f328e0439eb386870 UID:f3e7ce67-2965-42e1-af71-563f35202516 NetNS:/var/run/netns/67703c08-2251-40f5-9025-72d76e74efe1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000faed80}] Aliases:map[]}"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.719659863Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-775hv to CNI network \"kindnet\" (type=ptp)"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.730679214Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-775hv Namespace:default ID:c92a9a7d7f6822910996a2e3b4054fbd4740b918059b2e9f328e0439eb386870 UID:f3e7ce67-2965-42e1-af71-563f35202516 NetNS:/var/run/netns/67703c08-2251-40f5-9025-72d76e74efe1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000faed80}] Aliases:map[]}"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.730809569Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-775hv for CNI network kindnet (type=ptp)"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.731724115Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.732499395Z" level=info msg="Ran pod sandbox c92a9a7d7f6822910996a2e3b4054fbd4740b918059b2e9f328e0439eb386870 with infra container: default/hello-world-app-5d498dc89-775hv/POD" id=58151567-734e-4c3f-930d-7d38716c62cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.733703037Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=46c7faf7-723c-49c6-875d-1843a2ce47a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.733827927Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=46c7faf7-723c-49c6-875d-1843a2ce47a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.733877714Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=46c7faf7-723c-49c6-875d-1843a2ce47a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.734581121Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d0d799dc-9320-4480-b198-6ba149d0ef3a name=/runtime.v1.ImageService/PullImage
	Nov 24 08:33:34 addons-962100 crio[772]: time="2025-11-24T08:33:34.741450818Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.113771469Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=d0d799dc-9320-4480-b198-6ba149d0ef3a name=/runtime.v1.ImageService/PullImage
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.114428579Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8f842df6-58e0-4eff-867a-f610c62b4c10 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.116115355Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=20ce344e-0330-44dc-8d09-85b30f3baed2 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.11975563Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-775hv/hello-world-app" id=fff51a9c-3728-4f6d-b5d2-7481145b0b6f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.11989075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.125505305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.1256991Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7082f6176318af1b1efd24746e6c24bea024f87a65f76492168670ec1a49828e/merged/etc/passwd: no such file or directory"
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.125731923Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7082f6176318af1b1efd24746e6c24bea024f87a65f76492168670ec1a49828e/merged/etc/group: no such file or directory"
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.12599127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.182537406Z" level=info msg="Created container 4193c63b07c4d65edf1cf74e25e4b1f47a7de787ccbf92d377f3a964e3289a10: default/hello-world-app-5d498dc89-775hv/hello-world-app" id=fff51a9c-3728-4f6d-b5d2-7481145b0b6f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.18331947Z" level=info msg="Starting container: 4193c63b07c4d65edf1cf74e25e4b1f47a7de787ccbf92d377f3a964e3289a10" id=6034fa82-176e-4cba-89e0-68ac190f78d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 08:33:35 addons-962100 crio[772]: time="2025-11-24T08:33:35.185457468Z" level=info msg="Started container" PID=9297 containerID=4193c63b07c4d65edf1cf74e25e4b1f47a7de787ccbf92d377f3a964e3289a10 description=default/hello-world-app-5d498dc89-775hv/hello-world-app id=6034fa82-176e-4cba-89e0-68ac190f78d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c92a9a7d7f6822910996a2e3b4054fbd4740b918059b2e9f328e0439eb386870
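
The CRI-O trace above is one complete CRI sequence for the hello-world-app pod: RunPodSandbox, ImageStatus (image not found), PullImage, ImageStatus again, CreateContainer, StartContainer. A sketch of issuing the same ImageStatus query directly over the CRI socket follows; the endpoint /var/run/crio/crio.sock is CRI-O's conventional default and may differ on other setups.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default endpoint; adjust if your runtime listens elsewhere.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same query the kubelet made before pulling echo-server above.
        resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
            Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"},
        })
        if err != nil {
            panic(err)
        }
        if resp.Image == nil {
            fmt.Println("image not found; kubelet would call PullImage next")
        } else {
            fmt.Println("image present:", resp.Image.Id)
        }
    }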
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	4193c63b07c4d       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   c92a9a7d7f682       hello-world-app-5d498dc89-775hv            default
	91d297154e6dd       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   9ba57602665db       registry-creds-764b6fb674-q7n9p            kube-system
	d520e4b1cc88d       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   79f1cd8913efd       nginx                                      default
	4194471ba3f85       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   39e10a4fdfab7       busybox                                    default
	dcf5971be7507       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   7b7cbae64d7a8       ingress-nginx-controller-6c8bf45fb-6jbv4   ingress-nginx
	8ef7618202431       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   fd5a17a7b82b7       gcp-auth-78565c9fb4-s884b                  gcp-auth
	0b6bed5093f7a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	4b4e46a4d1356       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	ab4c77cb74d98       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	9c0b3f7c96a76       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	f12215527d7fc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   a20861c13e13b       gadget-k7jjg                               gadget
	fe44815b0c642       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	7ebc78750ca51       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   03d4e5db7133f       registry-proxy-p4gxl                       kube-system
	b825e1d2b115c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	8bbd9f289c92c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   28addcd28d33c       nvidia-device-plugin-daemonset-mf4wk       kube-system
	5c530c99eae1b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   09dbd720216cc       amd-gpu-device-plugin-cs5ww                kube-system
	c9e2bec536040       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              patch                                    0                   0d68186fcd987       ingress-nginx-admission-patch-kcqn2        ingress-nginx
	f975d80052ff5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   9344fd2fc0453       ingress-nginx-admission-create-xv7ps       ingress-nginx
	648f5560e5b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   68cc75a04bcb0       snapshot-controller-7d9fbc56b8-2fbw8       kube-system
	267fb926260fe       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   25c2d162621a1       yakd-dashboard-5ff678cb9-xvtxb             yakd-dashboard
	c0296faa52b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   d79856deda5dd       snapshot-controller-7d9fbc56b8-lls6s       kube-system
	6d6152975c279       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   ce4e0723716c1       csi-hostpath-attacher-0                    kube-system
	9808d8d748e86       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   4b862b0fb29d3       csi-hostpath-resizer-0                     kube-system
	e5b3bc6f75f2b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   debda6c7d872f       kube-ingress-dns-minikube                  kube-system
	d267683000cc4       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   3ea993c1fb82a       cloud-spanner-emulator-5bdddb765-qhv6q     default
	ed9045036451f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   61363cd3ab4cb       local-path-provisioner-648f6765c9-5wm5f    local-path-storage
	f40ff74c2839e       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   22b44cf6480c3       registry-6b586f9694-jtnn9                  kube-system
	f6f66d85b0739       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   ce6b67c4dbe69       metrics-server-85b7d694d7-mb5jb            kube-system
	57a5df478ca20       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   b42534c1a83bc       coredns-66bc5c9577-hvw7n                   kube-system
	dedc546cfa8d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   29204f49b78d1       storage-provisioner                        kube-system
	c2361bae81167       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   01b66c624db6d       kindnet-kzhgg                              kube-system
	c3e272d2f60e0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago            Running             kube-proxy                               0                   bed6cf1ba640b       kube-proxy-5hrvh                           kube-system
	4d0e2042b8500       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   f06bdd195677a       kube-apiserver-addons-962100               kube-system
	a0cbba27959f4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   e4b1077fa7da2       etcd-addons-962100                         kube-system
	b758fa8074d44       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   6976303d62a23       kube-controller-manager-addons-962100      kube-system
	5d9b85005d8ee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   643e424bd1b17       kube-scheduler-addons-962100               kube-system
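
The table above is the CRI runtime's container list, with container and pod sandbox IDs truncated to 13 characters (the same view `crictl ps -a` renders). Programmatically it is a single ListContainers call on the runtime service; a sketch against the same socket as in the previous example:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // %.13s matches the truncated CONTAINER column in the table above.
            fmt.Printf("%.13s  %s  %s\n", c.Id, c.Image.GetImage(), c.State)
        }
    }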
	
	
	==> coredns [57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50] <==
	[INFO] 10.244.0.22:52656 - 44124 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184368s
	[INFO] 10.244.0.22:50663 - 59964 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006314683s
	[INFO] 10.244.0.22:49770 - 56680 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006807088s
	[INFO] 10.244.0.22:40600 - 32872 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006777077s
	[INFO] 10.244.0.22:33824 - 9750 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006954044s
	[INFO] 10.244.0.22:60070 - 34083 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005269102s
	[INFO] 10.244.0.22:33260 - 51789 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005389s
	[INFO] 10.244.0.22:34673 - 21156 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001263523s
	[INFO] 10.244.0.22:60528 - 22691 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002085663s
	[INFO] 10.244.0.25:46799 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000282401s
	[INFO] 10.244.0.25:55757 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160904s
	[INFO] 10.244.0.27:51553 - 7163 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000178393s
	[INFO] 10.244.0.27:36690 - 57420 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000180739s
	[INFO] 10.244.0.27:59799 - 23858 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00010536s
	[INFO] 10.244.0.27:43331 - 16631 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000170574s
	[INFO] 10.244.0.27:47386 - 28539 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000097852s
	[INFO] 10.244.0.27:42937 - 8790 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000113286s
	[INFO] 10.244.0.27:46546 - 12880 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005932698s
	[INFO] 10.244.0.27:49927 - 45842 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006031553s
	[INFO] 10.244.0.27:58979 - 30734 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004350026s
	[INFO] 10.244.0.27:52053 - 46147 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005655898s
	[INFO] 10.244.0.27:58418 - 3885 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004121197s
	[INFO] 10.244.0.27:40400 - 6053 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005717032s
	[INFO] 10.244.0.27:56709 - 4424 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001758739s
	[INFO] 10.244.0.27:57389 - 2533 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002318605s
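
The NXDOMAIN ladder above (storage.googleapis.com.cluster.local, then .us-east4-a.c.k8s-minikube.internal, .c.k8s-minikube.internal, .google.internal, and finally the bare name) is the pod's resolver walking its resolv.conf search domains, because ndots:5 treats any name with fewer than five dots as relative. A trailing dot marks a name fully qualified and skips the walk. A small sketch to observe both behaviors (run it inside a pod to see the search-path effect):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // Relative name: with ndots:5, the resolver tries each search domain
        // first -- the NXDOMAIN lines in the CoreDNS log above.
        addrs, _ := net.DefaultResolver.LookupIPAddr(ctx, "storage.googleapis.com")
        fmt.Println("relative:", addrs)

        // Trailing dot: fully qualified, resolved directly with no search walk.
        addrs, _ = net.DefaultResolver.LookupIPAddr(ctx, "storage.googleapis.com.")
        fmt.Println("fqdn:", addrs)
    }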
	
	
	==> describe nodes <==
	Name:               addons-962100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-962100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=addons-962100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_29_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-962100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-962100"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:29:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-962100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:33:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:33:35 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:33:35 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:33:35 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:33:35 +0000   Mon, 24 Nov 2025 08:30:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-962100
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                8fe3bd7f-1ad1-4365-8ebc-47aaf9cc78fb
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-5bdddb765-qhv6q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     hello-world-app-5d498dc89-775hv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-k7jjg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  gcp-auth                    gcp-auth-78565c9fb4-s884b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6jbv4    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m59s
	  kube-system                 amd-gpu-device-plugin-cs5ww                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 coredns-66bc5c9577-hvw7n                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 csi-hostpathplugin-lnrv4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 etcd-addons-962100                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m6s
	  kube-system                 kindnet-kzhgg                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-addons-962100                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-addons-962100       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-5hrvh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-addons-962100                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 metrics-server-85b7d694d7-mb5jb             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m59s
	  kube-system                 nvidia-device-plugin-daemonset-mf4wk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 registry-6b586f9694-jtnn9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 registry-creds-764b6fb674-q7n9p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 registry-proxy-p4gxl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-2fbw8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-lls6s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  local-path-storage          local-path-provisioner-648f6765c9-5wm5f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xvtxb              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m58s  kube-proxy       
	  Normal  Starting                 4m6s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s   kubelet          Node addons-962100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s   kubelet          Node addons-962100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s   kubelet          Node addons-962100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m2s   node-controller  Node addons-962100 event: Registered Node addons-962100 in Controller
	  Normal  NodeReady                3m19s  kubelet          Node addons-962100 status is now: NodeReady
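
The describe output above is assembled from the Node object's status. A client-go sketch that fetches the same conditions and allocatable resources for addons-962100 (the kubeconfig path is the standard default; adjust as needed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-962100", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The same rows kubectl prints in the Conditions table above.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
        fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu(),
            "memory:", node.Status.Allocatable.Memory())
    }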
	
	
	==> dmesg <==
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 08:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.027365] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023898] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.024840] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.022897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +4.031610] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +8.191119] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[ +16.382253] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
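
The repeated "martian source" lines mean the kernel saw packets claiming a loopback source (127.0.0.1) arriving on eth0; they are logged because log_martians is enabled, while rp_filter decides whether such packets are also dropped. A sketch that reads both sysctls from procfs (the interface name eth0 is taken from the log above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Standard procfs sysctl paths; log_martians only controls the dmesg
        // lines above, rp_filter controls the drop behavior.
        for _, k := range []string{
            "/proc/sys/net/ipv4/conf/eth0/log_martians",
            "/proc/sys/net/ipv4/conf/eth0/rp_filter",
        } {
            b, err := os.ReadFile(k)
            if err != nil {
                fmt.Println(k, "->", err)
                continue
            }
            fmt.Println(k, "->", strings.TrimSpace(string(b)))
        }
    }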
	
	
	==> etcd [a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2] <==
	{"level":"warn","ts":"2025-11-24T08:29:27.420090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.426952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.433421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.442692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.448749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.455478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.461229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.467175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.473687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.481460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.487550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.507604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.514178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.520247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.562175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:38.562308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:38.570632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.955541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.962503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.974090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.980382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:30:24.673300Z","caller":"traceutil/trace.go:172","msg":"trace[504337096] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"133.138113ms","start":"2025-11-24T08:30:24.540146Z","end":"2025-11-24T08:30:24.673285Z","steps":["trace[504337096] 'process raft request'  (duration: 133.013116ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:30:49.117674Z","caller":"traceutil/trace.go:172","msg":"trace[1548160243] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"114.314346ms","start":"2025-11-24T08:30:49.003343Z","end":"2025-11-24T08:30:49.117658Z","steps":["trace[1548160243] 'process raft request'  (duration: 114.159533ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:30:49.452412Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.038502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:30:49.452498Z","caller":"traceutil/trace.go:172","msg":"trace[197173428] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.168793ms","start":"2025-11-24T08:30:49.301314Z","end":"2025-11-24T08:30:49.452483Z","steps":["trace[197173428] 'range keys from in-memory index tree'  (duration: 150.862485ms)"],"step_count":1}
	
	
	==> gcp-auth [8ef7618202431d505d8b8ddaf32376364e37c9c96ea31b77ef7b58e16f648587] <==
	2025/11/24 08:30:51 GCP Auth Webhook started!
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	2025/11/24 08:31:13 Ready to marshal response ...
	2025/11/24 08:31:13 Ready to write response ...
	2025/11/24 08:31:17 Ready to marshal response ...
	2025/11/24 08:31:17 Ready to write response ...
	2025/11/24 08:31:20 Ready to marshal response ...
	2025/11/24 08:31:20 Ready to write response ...
	2025/11/24 08:31:20 Ready to marshal response ...
	2025/11/24 08:31:20 Ready to write response ...
	2025/11/24 08:31:24 Ready to marshal response ...
	2025/11/24 08:31:24 Ready to write response ...
	2025/11/24 08:31:28 Ready to marshal response ...
	2025/11/24 08:31:28 Ready to write response ...
	2025/11/24 08:31:53 Ready to marshal response ...
	2025/11/24 08:31:53 Ready to write response ...
	2025/11/24 08:33:34 Ready to marshal response ...
	2025/11/24 08:33:34 Ready to write response ...
	
	
	==> kernel <==
	 08:33:36 up 16 min,  0 user,  load average: 0.44, 0.55, 0.28
	Linux addons-962100 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9] <==
	I1124 08:31:26.828541       1 main.go:301] handling current node
	I1124 08:31:36.828094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:31:36.828121       1 main.go:301] handling current node
	I1124 08:31:46.832272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:31:46.832314       1 main.go:301] handling current node
	I1124 08:31:56.827799       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:31:56.827844       1 main.go:301] handling current node
	I1124 08:32:06.829835       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:06.829874       1 main.go:301] handling current node
	I1124 08:32:16.827825       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:16.827856       1 main.go:301] handling current node
	I1124 08:32:26.828453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:26.828482       1 main.go:301] handling current node
	I1124 08:32:36.828018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:36.828052       1 main.go:301] handling current node
	I1124 08:32:46.827789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:46.827824       1 main.go:301] handling current node
	I1124 08:32:56.828560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:32:56.828593       1 main.go:301] handling current node
	I1124 08:33:06.828140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:33:06.828167       1 main.go:301] handling current node
	I1124 08:33:16.829565       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:33:16.829598       1 main.go:301] handling current node
	I1124 08:33:26.828518       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:33:26.828548       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6] <==
	E1124 08:30:20.551573       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:20.557854       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:20.579216       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	W1124 08:30:21.550303       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:21.550673       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1124 08:30:21.550767       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 08:30:21.550614       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:21.550962       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1124 08:30:21.552244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 08:30:25.630835       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:25.630892       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 08:30:25.630930       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1124 08:30:25.641113       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 08:31:06.355192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56656: use of closed network connection
	E1124 08:31:06.498593       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56682: use of closed network connection
	I1124 08:31:13.261386       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 08:31:13.425743       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.163.5"}
	I1124 08:31:34.683951       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 08:33:34.477900       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.85.78"}
	
	
	==> kube-controller-manager [b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a] <==
	I1124 08:29:34.937682       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 08:29:34.937757       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 08:29:34.937815       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 08:29:34.937849       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 08:29:34.938144       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 08:29:34.938147       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 08:29:34.938161       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 08:29:34.938230       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 08:29:34.938277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 08:29:34.939370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 08:29:34.939469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 08:29:34.939487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 08:29:34.940670       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 08:29:34.942841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:29:34.946094       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 08:29:34.946113       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:29:34.956784       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 08:30:04.950172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 08:30:04.950282       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 08:30:04.950322       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 08:30:04.964880       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 08:30:04.968816       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 08:30:05.050732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:30:05.069249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 08:30:19.893479       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829] <==
	I1124 08:29:36.375635       1 server_linux.go:53] "Using iptables proxy"
	I1124 08:29:36.479434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 08:29:36.580514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 08:29:36.582789       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:29:36.584803       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:29:36.901185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:29:36.901301       1 server_linux.go:132] "Using iptables Proxier"
	I1124 08:29:37.002081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:29:37.030383       1 server.go:527] "Version info" version="v1.34.2"
	I1124 08:29:37.033169       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:29:37.122963       1 config.go:200] "Starting service config controller"
	I1124 08:29:37.122988       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:29:37.123043       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:29:37.123048       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:29:37.123062       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:29:37.123067       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:29:37.160348       1 config.go:309] "Starting node config controller"
	I1124 08:29:37.160432       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:29:37.160444       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:29:37.223405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:29:37.223510       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:29:37.223544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609] <==
	E1124 08:29:27.948525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:29:27.948607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:29:27.948601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:29:27.948786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 08:29:27.948809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:29:27.948850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:29:27.948872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:29:27.948897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 08:29:27.949004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:29:27.949018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:29:27.949066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:29:27.949072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:29:27.949109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:29:27.949207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 08:29:28.797253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:29:28.886733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:29:28.912829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:29:28.971618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 08:29:28.989591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:29:29.025616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:29:29.091487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:29:29.100577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 08:29:29.123483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:29:29.158858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 08:29:31.446280       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 08:31:53 addons-962100 kubelet[1276]: I1124 08:31:53.496808    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\") " pod="default/task-pv-pod-restore"
	Nov 24 08:31:53 addons-962100 kubelet[1276]: I1124 08:31:53.602781    1276 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-bc18902d-aac7-44de-904e-babef34725d9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19\") pod \"task-pv-pod-restore\" (UID: \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/967ac4298937128a7cc579e9f71c19afdbe86402690bafd6cc5c1020976bb933/globalmount\"" pod="default/task-pv-pod-restore"
	Nov 24 08:31:54 addons-962100 kubelet[1276]: I1124 08:31:54.964570    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.708372851 podStartE2EDuration="1.964551864s" podCreationTimestamp="2025-11-24 08:31:53 +0000 UTC" firstStartedPulling="2025-11-24 08:31:53.667085392 +0000 UTC m=+143.358277535" lastFinishedPulling="2025-11-24 08:31:53.923264409 +0000 UTC m=+143.614456548" observedRunningTime="2025-11-24 08:31:54.963683831 +0000 UTC m=+144.654875989" watchObservedRunningTime="2025-11-24 08:31:54.964551864 +0000 UTC m=+144.655744022"
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.655509    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19\") pod \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\" (UID: \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\") "
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.655572    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-gcp-creds\") pod \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\" (UID: \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\") "
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.655615    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxq5\" (UniqueName: \"kubernetes.io/projected/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-kube-api-access-5rxq5\") pod \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\" (UID: \"0fc0a242-ee39-47b4-99f1-68a5aae6fd24\") "
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.655706    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0fc0a242-ee39-47b4-99f1-68a5aae6fd24" (UID: "0fc0a242-ee39-47b4-99f1-68a5aae6fd24"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.657824    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-kube-api-access-5rxq5" (OuterVolumeSpecName: "kube-api-access-5rxq5") pod "0fc0a242-ee39-47b4-99f1-68a5aae6fd24" (UID: "0fc0a242-ee39-47b4-99f1-68a5aae6fd24"). InnerVolumeSpecName "kube-api-access-5rxq5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.658801    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19" (OuterVolumeSpecName: "task-pv-storage") pod "0fc0a242-ee39-47b4-99f1-68a5aae6fd24" (UID: "0fc0a242-ee39-47b4-99f1-68a5aae6fd24"). InnerVolumeSpecName "pvc-bc18902d-aac7-44de-904e-babef34725d9". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.756164    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rxq5\" (UniqueName: \"kubernetes.io/projected/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-kube-api-access-5rxq5\") on node \"addons-962100\" DevicePath \"\""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.756219    1276 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-bc18902d-aac7-44de-904e-babef34725d9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19\") on node \"addons-962100\" "
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.756232    1276 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fc0a242-ee39-47b4-99f1-68a5aae6fd24-gcp-creds\") on node \"addons-962100\" DevicePath \"\""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.760573    1276 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-bc18902d-aac7-44de-904e-babef34725d9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19") on node "addons-962100"
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.857178    1276 reconciler_common.go:299] "Volume detached for volume \"pvc-bc18902d-aac7-44de-904e-babef34725d9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0722d2fb-c910-11f0-93f2-a68cfc12ca19\") on node \"addons-962100\" DevicePath \"\""
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.978775    1276 scope.go:117] "RemoveContainer" containerID="9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb"
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.994063    1276 scope.go:117] "RemoveContainer" containerID="9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb"
	Nov 24 08:32:01 addons-962100 kubelet[1276]: E1124 08:32:01.995233    1276 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb\": container with ID starting with 9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb not found: ID does not exist" containerID="9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb"
	Nov 24 08:32:01 addons-962100 kubelet[1276]: I1124 08:32:01.995430    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb"} err="failed to get container status \"9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb\": rpc error: code = NotFound desc = could not find container \"9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb\": container with ID starting with 9b1b51fa4ad20587e50a497bfeceded2f04959767d96abbead19f61d6f5a2bdb not found: ID does not exist"
	Nov 24 08:32:02 addons-962100 kubelet[1276]: I1124 08:32:02.389710    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc0a242-ee39-47b4-99f1-68a5aae6fd24" path="/var/lib/kubelet/pods/0fc0a242-ee39-47b4-99f1-68a5aae6fd24/volumes"
	Nov 24 08:32:44 addons-962100 kubelet[1276]: I1124 08:32:44.386407    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cs5ww" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:33:17 addons-962100 kubelet[1276]: I1124 08:33:17.387064    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-p4gxl" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:33:20 addons-962100 kubelet[1276]: I1124 08:33:20.388131    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mf4wk" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:33:34 addons-962100 kubelet[1276]: I1124 08:33:34.525172    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f3e7ce67-2965-42e1-af71-563f35202516-gcp-creds\") pod \"hello-world-app-5d498dc89-775hv\" (UID: \"f3e7ce67-2965-42e1-af71-563f35202516\") " pod="default/hello-world-app-5d498dc89-775hv"
	Nov 24 08:33:34 addons-962100 kubelet[1276]: I1124 08:33:34.525221    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhlz\" (UniqueName: \"kubernetes.io/projected/f3e7ce67-2965-42e1-af71-563f35202516-kube-api-access-glhlz\") pod \"hello-world-app-5d498dc89-775hv\" (UID: \"f3e7ce67-2965-42e1-af71-563f35202516\") " pod="default/hello-world-app-5d498dc89-775hv"
	Nov 24 08:33:35 addons-962100 kubelet[1276]: I1124 08:33:35.319909    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-775hv" podStartSLOduration=0.938718119 podStartE2EDuration="1.319887135s" podCreationTimestamp="2025-11-24 08:33:34 +0000 UTC" firstStartedPulling="2025-11-24 08:33:34.734160595 +0000 UTC m=+244.425352748" lastFinishedPulling="2025-11-24 08:33:35.115329614 +0000 UTC m=+244.806521764" observedRunningTime="2025-11-24 08:33:35.318747266 +0000 UTC m=+245.009939424" watchObservedRunningTime="2025-11-24 08:33:35.319887135 +0000 UTC m=+245.011079296"
	
	
	==> storage-provisioner [dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49] <==
	W1124 08:33:10.460948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:12.463449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:12.466990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:14.469940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:14.475158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:16.478227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:16.481688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:18.485135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:18.488853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:20.491363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:20.495401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:22.498431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:22.503527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:24.506714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:24.511134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:26.514019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:26.519046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:28.522440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:28.527293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:30.529838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:30.533678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:32.536484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:32.541044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:34.544058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:33:34.549283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
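A side note on the storage-provisioner warnings in the log above: "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" is the client-go deprecation warning, and EndpointSlice is the replacement resource it names. A quick, hypothetical way to confirm the replacement resource is served on this cluster (this command is assumed for illustration and was not part of the test run):

	kubectl --context addons-962100 get endpointslices -A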
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-962100 -n addons-962100
helpers_test.go:269: (dbg) Run:  kubectl --context addons-962100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2: exit status 1 (56.866152ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xv7ps" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kcqn2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (245.646335ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:33:37.011376   25260 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:33:37.011520   25260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:33:37.011529   25260 out.go:374] Setting ErrFile to fd 2...
	I1124 08:33:37.011532   25260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:33:37.011704   25260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:33:37.011961   25260 mustload.go:66] Loading cluster: addons-962100
	I1124 08:33:37.012279   25260 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:33:37.012294   25260 addons.go:622] checking whether the cluster is paused
	I1124 08:33:37.012395   25260 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:33:37.012408   25260 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:33:37.012764   25260 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:33:37.030930   25260 ssh_runner.go:195] Run: systemctl --version
	I1124 08:33:37.030982   25260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:33:37.048820   25260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:33:37.150040   25260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:33:37.150143   25260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:33:37.178994   25260 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:33:37.179018   25260 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:33:37.179023   25260 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:33:37.179026   25260 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:33:37.179029   25260 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:33:37.179060   25260 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:33:37.179063   25260 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:33:37.179066   25260 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:33:37.179069   25260 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:33:37.179075   25260 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:33:37.179089   25260 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:33:37.179095   25260 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:33:37.179098   25260 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:33:37.179101   25260 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:33:37.179104   25260 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:33:37.179109   25260 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:33:37.179111   25260 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:33:37.179115   25260 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:33:37.179117   25260 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:33:37.179120   25260 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:33:37.179126   25260 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:33:37.179128   25260 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:33:37.179131   25260 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:33:37.179134   25260 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:33:37.179137   25260 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:33:37.179140   25260 cri.go:89] found id: ""
	I1124 08:33:37.179182   25260 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:33:37.193850   25260 out.go:203] 
	W1124 08:33:37.195229   25260 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:33:37.195248   25260 out.go:285] * 
	* 
	W1124 08:33:37.198261   25260 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:33:37.199495   25260 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
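The exit status 11 here (and in the matching failures below) traces back to the paused-state probe shown in the stderr above: minikube lists kube-system containers through crictl (cri.go:54), then runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory". A minimal sketch for replaying the two steps against the same profile; it assumes runc's default state root (/run/runc when run as root) and that the node's cri-o may be configured with a different OCI runtime (for example crun), in which case that directory is never created:

	# Step 1: the CRI listing that succeeds in the log above
	minikube -p addons-962100 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# Step 2: the paused-state probe that fails; runc reads container state from /run/runc by default
	minikube -p addons-962100 ssh -- sudo runc list -f json

	# Hypothetical follow-up: check whether runc's state directory exists on the node at all
	minikube -p addons-962100 ssh -- ls /run/runc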
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.460691ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:33:37.258061   25322 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:33:37.258372   25322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:33:37.258384   25322 out.go:374] Setting ErrFile to fd 2...
	I1124 08:33:37.258388   25322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:33:37.258569   25322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:33:37.258816   25322 mustload.go:66] Loading cluster: addons-962100
	I1124 08:33:37.259148   25322 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:33:37.259163   25322 addons.go:622] checking whether the cluster is paused
	I1124 08:33:37.259247   25322 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:33:37.259261   25322 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:33:37.259644   25322 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:33:37.277248   25322 ssh_runner.go:195] Run: systemctl --version
	I1124 08:33:37.277301   25322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:33:37.294710   25322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:33:37.395370   25322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:33:37.395443   25322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:33:37.423719   25322 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:33:37.423740   25322 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:33:37.423745   25322 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:33:37.423748   25322 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:33:37.423751   25322 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:33:37.423755   25322 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:33:37.423758   25322 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:33:37.423761   25322 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:33:37.423764   25322 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:33:37.423768   25322 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:33:37.423771   25322 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:33:37.423774   25322 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:33:37.423777   25322 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:33:37.423780   25322 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:33:37.423783   25322 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:33:37.423787   25322 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:33:37.423823   25322 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:33:37.423831   25322 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:33:37.423838   25322 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:33:37.423841   25322 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:33:37.423844   25322 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:33:37.423851   25322 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:33:37.423863   25322 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:33:37.423869   25322 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:33:37.423872   25322 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:33:37.423876   25322 cri.go:89] found id: ""
	I1124 08:33:37.423922   25322 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:33:37.437514   25322 out.go:203] 
	W1124 08:33:37.438711   25322 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:33:37.438732   25322 out.go:285] * 
	* 
	W1124 08:33:37.441668   25322 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:33:37.442956   25322 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.42s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-k7jjg" [3e68c0ca-367e-4360-afae-4892684f64b5] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0038245s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.134851ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:31:19.649222   21163 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:19.649561   21163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:19.649571   21163 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:19.649577   21163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:19.649778   21163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:19.650051   21163 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:19.650410   21163 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:19.650429   21163 addons.go:622] checking whether the cluster is paused
	I1124 08:31:19.650546   21163 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:19.650568   21163 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:19.650938   21163 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:19.668486   21163 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:19.668545   21163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:19.686092   21163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:19.785905   21163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:19.785977   21163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:19.815880   21163 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:19.815897   21163 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:19.815901   21163 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:19.815905   21163 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:19.815908   21163 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:19.815911   21163 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:19.815914   21163 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:19.815916   21163 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:19.815919   21163 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:19.815930   21163 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:19.815933   21163 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:19.815936   21163 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:19.815939   21163 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:19.815942   21163 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:19.815944   21163 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:19.815954   21163 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:19.815962   21163 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:19.815966   21163 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:19.815970   21163 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:19.815973   21163 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:19.815976   21163 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:19.815978   21163 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:19.815981   21163 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:19.815983   21163 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:19.815986   21163 cri.go:89] found id: ""
	I1124 08:31:19.816019   21163 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:19.830356   21163 out.go:203] 
	W1124 08:31:19.831834   21163 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:19.831861   21163 out.go:285] * 
	* 
	W1124 08:31:19.834753   21163 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:19.835933   21163 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
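
Every addons-disable failure in this run has the same signature visible in the stderr above: the paused check in addons.go lists the kube-system containers through crictl successfully, then runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on a crio node. A minimal manual reproduction of that check, assuming the profile name from this run and that "minikube ssh -- <cmd>" is used to reach the node:

	# first step of the paused check: list kube-system containers (succeeds)
	minikube -p addons-962100 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# second step: ask runc for its container list (fails under crio)
	minikube -p addons-962100 ssh -- sudo runc list -f json
	# -> level=error msg="open /run/runc: no such file or directory", exit status 1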

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.111063ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002830969s
addons_test.go:463: (dbg) Run:  kubectl --context addons-962100 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.568068ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:11.891383   20241 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:11.891705   20241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:11.891718   20241 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:11.891724   20241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:11.891956   20241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:11.892210   20241 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:11.892561   20241 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:11.892579   20241 addons.go:622] checking whether the cluster is paused
	I1124 08:31:11.892678   20241 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:11.892699   20241 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:11.893057   20241 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:11.910856   20241 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:11.910916   20241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:11.928814   20241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:12.028962   20241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:12.029091   20241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:12.057527   20241 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:12.057552   20241 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:12.057557   20241 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:12.057560   20241 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:12.057563   20241 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:12.057566   20241 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:12.057569   20241 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:12.057572   20241 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:12.057574   20241 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:12.057581   20241 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:12.057584   20241 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:12.057586   20241 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:12.057589   20241 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:12.057592   20241 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:12.057594   20241 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:12.057601   20241 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:12.057604   20241 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:12.057608   20241 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:12.057611   20241 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:12.057614   20241 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:12.057617   20241 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:12.057620   20241 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:12.057623   20241 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:12.057625   20241 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:12.057628   20241 cri.go:89] found id: ""
	I1124 08:31:12.057663   20241 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:12.071824   20241 out.go:203] 
	W1124 08:31:12.072932   20241 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:12.072950   20241 out.go:285] * 
	* 
	W1124 08:31:12.075861   20241 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:12.077196   20241 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (45.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 08:31:17.332274    9243 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 08:31:17.334989    9243 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 08:31:17.335008    9243 kapi.go:107] duration metric: took 2.75379ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.762749ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-962100 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-962100 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [153aa2f7-ecd5-4dcf-ae6f-65a271abf7b8] Pending
helpers_test.go:352: "task-pv-pod" [153aa2f7-ecd5-4dcf-ae6f-65a271abf7b8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [153aa2f7-ecd5-4dcf-ae6f-65a271abf7b8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003299865s
addons_test.go:572: (dbg) Run:  kubectl --context addons-962100 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-962100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-962100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-962100 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-962100 delete pod task-pv-pod: (1.162169761s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-962100 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-962100 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-962100 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0fc0a242-ee39-47b4-99f1-68a5aae6fd24] Pending
helpers_test.go:352: "task-pv-pod-restore" [0fc0a242-ee39-47b4-99f1-68a5aae6fd24] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0fc0a242-ee39-47b4-99f1-68a5aae6fd24] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003148121s
addons_test.go:614: (dbg) Run:  kubectl --context addons-962100 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-962100 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-962100 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (245.581236ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:32:02.375312   23224 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:32:02.375623   23224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:32:02.375634   23224 out.go:374] Setting ErrFile to fd 2...
	I1124 08:32:02.375638   23224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:32:02.375843   23224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:32:02.376083   23224 mustload.go:66] Loading cluster: addons-962100
	I1124 08:32:02.376394   23224 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:32:02.376408   23224 addons.go:622] checking whether the cluster is paused
	I1124 08:32:02.376482   23224 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:32:02.376497   23224 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:32:02.376892   23224 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:32:02.395430   23224 ssh_runner.go:195] Run: systemctl --version
	I1124 08:32:02.395488   23224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:32:02.413196   23224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:32:02.513963   23224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:32:02.514031   23224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:32:02.543024   23224 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:32:02.543047   23224 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:32:02.543053   23224 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:32:02.543058   23224 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:32:02.543062   23224 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:32:02.543065   23224 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:32:02.543068   23224 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:32:02.543071   23224 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:32:02.543074   23224 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:32:02.543080   23224 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:32:02.543085   23224 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:32:02.543088   23224 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:32:02.543091   23224 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:32:02.543104   23224 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:32:02.543111   23224 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:32:02.543128   23224 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:32:02.543135   23224 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:32:02.543139   23224 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:32:02.543142   23224 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:32:02.543145   23224 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:32:02.543153   23224 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:32:02.543159   23224 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:32:02.543162   23224 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:32:02.543164   23224 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:32:02.543167   23224 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:32:02.543170   23224 cri.go:89] found id: ""
	I1124 08:32:02.543206   23224 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:32:02.557157   23224 out.go:203] 
	W1124 08:32:02.558561   23224 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:32:02.558579   23224 out.go:285] * 
	* 
	W1124 08:32:02.561820   23224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:32:02.563178   23224 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (245.097704ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:32:02.621585   23286 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:32:02.621748   23286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:32:02.621758   23286 out.go:374] Setting ErrFile to fd 2...
	I1124 08:32:02.621762   23286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:32:02.622050   23286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:32:02.622419   23286 mustload.go:66] Loading cluster: addons-962100
	I1124 08:32:02.622865   23286 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:32:02.622880   23286 addons.go:622] checking whether the cluster is paused
	I1124 08:32:02.623006   23286 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:32:02.623028   23286 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:32:02.623480   23286 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:32:02.641208   23286 ssh_runner.go:195] Run: systemctl --version
	I1124 08:32:02.641249   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:32:02.658829   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:32:02.759048   23286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:32:02.759122   23286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:32:02.788227   23286 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:32:02.788251   23286 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:32:02.788257   23286 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:32:02.788263   23286 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:32:02.788268   23286 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:32:02.788273   23286 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:32:02.788278   23286 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:32:02.788282   23286 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:32:02.788286   23286 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:32:02.788306   23286 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:32:02.788313   23286 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:32:02.788316   23286 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:32:02.788319   23286 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:32:02.788349   23286 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:32:02.788359   23286 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:32:02.788371   23286 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:32:02.788378   23286 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:32:02.788384   23286 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:32:02.788389   23286 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:32:02.788394   23286 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:32:02.788403   23286 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:32:02.788409   23286 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:32:02.788412   23286 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:32:02.788420   23286 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:32:02.788425   23286 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:32:02.788433   23286 cri.go:89] found id: ""
	I1124 08:32:02.788479   23286 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:32:02.803352   23286 out.go:203] 
	W1124 08:32:02.804589   23286 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:32:02.804611   23286 out.go:285] * 
	* 
	W1124 08:32:02.807636   23286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:32:02.808902   23286 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.48s)
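
For context on the restore steps above: hpvc-restore is a claim whose dataSource points at the VolumeSnapshot taken earlier, which is what lets task-pv-pod-restore start with the snapshotted data. The repo's testdata/csi-hostpath-driver/pvc-restore.yaml is not reproduced in this report; a minimal sketch of such a claim (object names from this run, storage class name assumed) looks like:

	kubectl --context addons-962100 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed: the class installed by the csi-hostpath-driver addon
	  dataSource:
	    name: new-snapshot-demo           # the VolumeSnapshot created above
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi
	EOF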

TestAddons/parallel/Headlamp (2.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-962100 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-962100 --alsologtostderr -v=1: exit status 11 (249.936713ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:06.829986   19398 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:06.830165   19398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:06.830176   19398 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:06.830182   19398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:06.830395   19398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:06.830651   19398 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:06.831007   19398 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:06.831024   19398 addons.go:622] checking whether the cluster is paused
	I1124 08:31:06.831147   19398 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:06.831169   19398 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:06.831583   19398 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:06.850492   19398 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:06.850549   19398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:06.868778   19398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:06.969220   19398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:06.969302   19398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:06.997529   19398 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:06.997560   19398 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:06.997567   19398 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:06.997572   19398 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:06.997578   19398 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:06.997590   19398 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:06.997596   19398 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:06.997601   19398 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:06.997606   19398 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:06.997629   19398 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:06.997644   19398 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:06.997652   19398 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:06.997656   19398 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:06.997664   19398 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:06.997671   19398 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:06.997681   19398 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:06.997686   19398 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:06.997693   19398 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:06.997697   19398 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:06.997701   19398 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:06.997709   19398 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:06.997713   19398 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:06.997722   19398 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:06.997733   19398 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:06.997737   19398 cri.go:89] found id: ""
	I1124 08:31:06.997790   19398 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:07.012702   19398 out.go:203] 
	W1124 08:31:07.013963   19398 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:07.013981   19398 out.go:285] * 
	* 
	W1124 08:31:07.017998   19398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:07.019291   19398 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-962100 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-962100
helpers_test.go:243: (dbg) docker inspect addons-962100:

-- stdout --
	[
	    {
	        "Id": "69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae",
	        "Created": "2025-11-24T08:29:13.070866673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T08:29:13.101040713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/hosts",
	        "LogPath": "/var/lib/docker/containers/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae/69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae-json.log",
	        "Name": "/addons-962100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-962100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-962100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69fc512320a451906b9d135dc8c91b440f686de9f3d61fb44eb44e27607383ae",
	                "LowerDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6bc159b5d216320aa0ea9a875ab73a1b97dd53f7e04b3c0465272d06b240ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-962100",
	                "Source": "/var/lib/docker/volumes/addons-962100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-962100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-962100",
	                "name.minikube.sigs.k8s.io": "addons-962100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "52626c57671cca89feb89fd2332b96fcc59f3db3a4f991b66200c8653078d474",
	            "SandboxKey": "/var/run/docker/netns/52626c57671c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-962100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67fc84e8838d05308c43a0142e49d7ab3d31c453db134ee2419e880ff573d4bb",
	                    "EndpointID": "7d516a0f48abc5ae6c58eb478fadea2b12dd061df0deaad9b6c9f3aa20f61609",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:90:e2:38:98:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-962100",
	                        "69fc512320a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
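The JSON above is the raw output of docker container inspect for the addons-962100 node container, captured for the post-mortem. As a minimal sketch for anyone re-running the check by hand (assuming the container still exists under this profile name), the host-port mappings alone can be pulled out with a Go-template format string instead of reading the full document:

    docker container inspect addons-962100 --format '{{json .NetworkSettings.Ports}}'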
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-962100 -n addons-962100
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-962100 logs -n 25: (1.120568307s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-092707   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-092707                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-092707   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-290395 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-290395   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-290395                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-290395   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-029472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-029472   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-029472                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-029472   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-092707                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-092707   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-290395                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-290395   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-029472                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-029472   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ --download-only -p download-docker-572456 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-572456 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ -p download-docker-572456                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-572456 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ --download-only -p binary-mirror-438068 --alsologtostderr --binary-mirror http://127.0.0.1:38621 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-438068   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-438068                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-438068   │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ addons  │ disable dashboard -p addons-962100                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-962100                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ start   │ -p addons-962100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:30 UTC │
	│ addons  │ addons-962100 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-962100 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-962100 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-962100          │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
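	# The Audit table above comes from the post-mortem log capture. A minimal way
	# to re-generate it for this profile, using the same command recorded earlier
	# in this report, would be:
	#   out/minikube-linux-amd64 -p addons-962100 logs -n 25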
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:49.204741   11068 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:49.204852   11068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:49.204861   11068 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:49.204865   11068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:49.205067   11068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:28:49.205569   11068 out.go:368] Setting JSON to false
	I1124 08:28:49.206352   11068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":675,"bootTime":1763972254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:49.206408   11068 start.go:143] virtualization: kvm guest
	I1124 08:28:49.208155   11068 out.go:179] * [addons-962100] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:28:49.209307   11068 notify.go:221] Checking for updates...
	I1124 08:28:49.209355   11068 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:28:49.210535   11068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:49.211732   11068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:28:49.213040   11068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:28:49.214183   11068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:28:49.215280   11068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:28:49.216584   11068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:49.239578   11068 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:28:49.239680   11068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:49.294796   11068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 08:28:49.285552794 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:49.294935   11068 docker.go:319] overlay module found
	I1124 08:28:49.296704   11068 out.go:179] * Using the docker driver based on user configuration
	I1124 08:28:49.297781   11068 start.go:309] selected driver: docker
	I1124 08:28:49.297794   11068 start.go:927] validating driver "docker" against <nil>
	I1124 08:28:49.297806   11068 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:28:49.298497   11068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:49.350596   11068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 08:28:49.340950974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:49.350764   11068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:49.350945   11068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:28:49.352421   11068 out.go:179] * Using Docker driver with root privileges
	I1124 08:28:49.353624   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:28:49.353684   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:28:49.353694   11068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 08:28:49.353742   11068 start.go:353] cluster config:
	{Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:28:49.355071   11068 out.go:179] * Starting "addons-962100" primary control-plane node in "addons-962100" cluster
	I1124 08:28:49.356085   11068 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 08:28:49.357236   11068 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 08:28:49.358394   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:28:49.358432   11068 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 08:28:49.358440   11068 cache.go:65] Caching tarball of preloaded images
	I1124 08:28:49.358481   11068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 08:28:49.358539   11068 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 08:28:49.358554   11068 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 08:28:49.358937   11068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json ...
	I1124 08:28:49.358962   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json: {Name:mkdc3c22d4d70a34b7b204e8d62eedb63621a714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:28:49.374881   11068 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:28:49.374990   11068 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 08:28:49.375012   11068 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 08:28:49.375021   11068 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 08:28:49.375030   11068 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 08:28:49.375037   11068 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 08:29:02.174472   11068 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 08:29:02.174506   11068 cache.go:243] Successfully downloaded all kic artifacts
	I1124 08:29:02.174553   11068 start.go:360] acquireMachinesLock for addons-962100: {Name:mk3e2d5d356e4c2edfb09ca9395f801263e4cc51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:02.174656   11068 start.go:364] duration metric: took 81.326µs to acquireMachinesLock for "addons-962100"
	I1124 08:29:02.174684   11068 start.go:93] Provisioning new machine with config: &{Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:29:02.174801   11068 start.go:125] createHost starting for "" (driver="docker")
	I1124 08:29:02.176966   11068 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 08:29:02.177187   11068 start.go:159] libmachine.API.Create for "addons-962100" (driver="docker")
	I1124 08:29:02.177216   11068 client.go:173] LocalClient.Create starting
	I1124 08:29:02.177315   11068 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 08:29:02.296893   11068 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 08:29:02.400633   11068 cli_runner.go:164] Run: docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 08:29:02.418089   11068 cli_runner.go:211] docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 08:29:02.418152   11068 network_create.go:284] running [docker network inspect addons-962100] to gather additional debugging logs...
	I1124 08:29:02.418170   11068 cli_runner.go:164] Run: docker network inspect addons-962100
	W1124 08:29:02.433948   11068 cli_runner.go:211] docker network inspect addons-962100 returned with exit code 1
	I1124 08:29:02.433975   11068 network_create.go:287] error running [docker network inspect addons-962100]: docker network inspect addons-962100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-962100 not found
	I1124 08:29:02.433987   11068 network_create.go:289] output of [docker network inspect addons-962100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-962100 not found
	
	** /stderr **
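	# The failed inspect above is the expected existence probe before the network
	# is created. A minimal shell equivalent of the check-then-create flow this
	# log records (subnet, gateway, and MTU values copied from the lines below):
	#   if ! docker network inspect addons-962100 >/dev/null 2>&1; then
	#     docker network create --driver=bridge --subnet=192.168.49.0/24 \
	#       --gateway=192.168.49.1 -o com.docker.network.driver.mtu=1500 addons-962100
	#   fi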
	I1124 08:29:02.434079   11068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 08:29:02.450793   11068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24860}
	I1124 08:29:02.450825   11068 network_create.go:124] attempt to create docker network addons-962100 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 08:29:02.450864   11068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-962100 addons-962100
	I1124 08:29:02.496424   11068 network_create.go:108] docker network addons-962100 192.168.49.0/24 created
	I1124 08:29:02.496453   11068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-962100" container
	I1124 08:29:02.496511   11068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 08:29:02.513514   11068 cli_runner.go:164] Run: docker volume create addons-962100 --label name.minikube.sigs.k8s.io=addons-962100 --label created_by.minikube.sigs.k8s.io=true
	I1124 08:29:02.531397   11068 oci.go:103] Successfully created a docker volume addons-962100
	I1124 08:29:02.531477   11068 cli_runner.go:164] Run: docker run --rm --name addons-962100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --entrypoint /usr/bin/test -v addons-962100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 08:29:09.170802   11068 cli_runner.go:217] Completed: docker run --rm --name addons-962100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --entrypoint /usr/bin/test -v addons-962100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.639289207s)
	I1124 08:29:09.170828   11068 oci.go:107] Successfully prepared a docker volume addons-962100
	I1124 08:29:09.170883   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:09.170894   11068 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 08:29:09.170936   11068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-962100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 08:29:12.996139   11068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-962100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.825153774s)
	I1124 08:29:12.996167   11068 kic.go:203] duration metric: took 3.825268872s to extract preloaded images to volume ...
	W1124 08:29:12.996245   11068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 08:29:12.996277   11068 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 08:29:12.996317   11068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 08:29:13.055427   11068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-962100 --name addons-962100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-962100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-962100 --network addons-962100 --ip 192.168.49.2 --volume addons-962100:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 08:29:13.364108   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Running}}
	I1124 08:29:13.383311   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.402899   11068 cli_runner.go:164] Run: docker exec addons-962100 stat /var/lib/dpkg/alternatives/iptables
	I1124 08:29:13.448446   11068 oci.go:144] the created container "addons-962100" has a running status.
	I1124 08:29:13.448471   11068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa...
	I1124 08:29:13.500824   11068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 08:29:13.529394   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.547314   11068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 08:29:13.547345   11068 kic_runner.go:114] Args: [docker exec --privileged addons-962100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 08:29:13.585309   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:13.605526   11068 machine.go:94] provisionDockerMachine start ...
	I1124 08:29:13.605624   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:13.623206   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:13.623453   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:13.623467   11068 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 08:29:13.624700   11068 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33246->127.0.0.1:32768: read: connection reset by peer
	I1124 08:29:16.767386   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-962100
	
	I1124 08:29:16.767417   11068 ubuntu.go:182] provisioning hostname "addons-962100"
	I1124 08:29:16.767487   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:16.785008   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:16.785213   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:16.785226   11068 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-962100 && echo "addons-962100" | sudo tee /etc/hostname
	I1124 08:29:16.934564   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-962100
	
	I1124 08:29:16.934625   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:16.952129   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:16.952325   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:16.952368   11068 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-962100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-962100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-962100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 08:29:17.092482   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 08:29:17.092515   11068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 08:29:17.092554   11068 ubuntu.go:190] setting up certificates
	I1124 08:29:17.092574   11068 provision.go:84] configureAuth start
	I1124 08:29:17.092644   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.110404   11068 provision.go:143] copyHostCerts
	I1124 08:29:17.110461   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 08:29:17.110594   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 08:29:17.110655   11068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 08:29:17.110709   11068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.addons-962100 san=[127.0.0.1 192.168.49.2 addons-962100 localhost minikube]
	I1124 08:29:17.174216   11068 provision.go:177] copyRemoteCerts
	I1124 08:29:17.174266   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 08:29:17.174297   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.191029   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.290055   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 08:29:17.308041   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 08:29:17.324552   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 08:29:17.340497   11068 provision.go:87] duration metric: took 247.906309ms to configureAuth
	I1124 08:29:17.340520   11068 ubuntu.go:206] setting minikube options for container-runtime
	I1124 08:29:17.340680   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:17.340767   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.356971   11068 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:17.357257   11068 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 08:29:17.357280   11068 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 08:29:17.632482   11068 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 08:29:17.632506   11068 machine.go:97] duration metric: took 4.026955886s to provisionDockerMachine
	I1124 08:29:17.632518   11068 client.go:176] duration metric: took 15.455291206s to LocalClient.Create
	I1124 08:29:17.632539   11068 start.go:167] duration metric: took 15.455351433s to libmachine.API.Create "addons-962100"
	I1124 08:29:17.632552   11068 start.go:293] postStartSetup for "addons-962100" (driver="docker")
	I1124 08:29:17.632563   11068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 08:29:17.632629   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 08:29:17.632673   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.650244   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.751671   11068 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 08:29:17.754897   11068 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 08:29:17.754924   11068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 08:29:17.754936   11068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 08:29:17.754994   11068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 08:29:17.755025   11068 start.go:296] duration metric: took 122.467001ms for postStartSetup
	I1124 08:29:17.755300   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.773051   11068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/config.json ...
	I1124 08:29:17.773365   11068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:29:17.773422   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.789516   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.886156   11068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 08:29:17.890419   11068 start.go:128] duration metric: took 15.715606607s to createHost
	I1124 08:29:17.890444   11068 start.go:83] releasing machines lock for "addons-962100", held for 15.715774317s
	I1124 08:29:17.890504   11068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-962100
	I1124 08:29:17.907663   11068 ssh_runner.go:195] Run: cat /version.json
	I1124 08:29:17.907703   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.907759   11068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 08:29:17.907835   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:17.925568   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:17.925991   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:18.021269   11068 ssh_runner.go:195] Run: systemctl --version
	I1124 08:29:18.077074   11068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 08:29:18.110631   11068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 08:29:18.114932   11068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 08:29:18.114995   11068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 08:29:18.139814   11068 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 08:29:18.139839   11068 start.go:496] detecting cgroup driver to use...
	I1124 08:29:18.139866   11068 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 08:29:18.139902   11068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 08:29:18.154687   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 08:29:18.166169   11068 docker.go:218] disabling cri-docker service (if available) ...
	I1124 08:29:18.166211   11068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 08:29:18.180981   11068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 08:29:18.197152   11068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 08:29:18.276794   11068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 08:29:18.360249   11068 docker.go:234] disabling docker service ...
	I1124 08:29:18.360302   11068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 08:29:18.376972   11068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 08:29:18.388541   11068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 08:29:18.468512   11068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 08:29:18.548485   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 08:29:18.560371   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 08:29:18.573732   11068 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.34.2/kubeadm
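The tee command just above writes a one-line /etc/crictl.yaml pointing crictl at cri-o's socket, which is why the later `crictl` invocations in this log need no --runtime-endpoint flag:

	$ cat /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock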
	I1124 08:29:19.366089   11068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 08:29:19.366150   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.376440   11068 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 08:29:19.376494   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.384571   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.392539   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.400595   11068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 08:29:19.407978   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.415861   11068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:19.428137   11068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
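Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, as an approximate fragment (only the edited keys are shown; surrounding section headers are as in the stock drop-in):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]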
	I1124 08:29:19.436236   11068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 08:29:19.442760   11068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 08:29:19.442798   11068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 08:29:19.454128   11068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
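A quick hedged check of the two kernel prerequisites just set up (not part of the recorded run):

	lsmod | grep br_netfilter                  # loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables  # resolvable once the module is in
	cat /proc/sys/net/ipv4/ip_forward          # expect 1 after the echo above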
	I1124 08:29:19.461695   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:19.536693   11068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 08:29:19.803421   11068 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 08:29:19.803496   11068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 08:29:19.807203   11068 start.go:564] Will wait 60s for crictl version
	I1124 08:29:19.807247   11068 ssh_runner.go:195] Run: which crictl
	I1124 08:29:19.810500   11068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 08:29:19.833967   11068 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 08:29:19.834066   11068 ssh_runner.go:195] Run: crio --version
	I1124 08:29:19.859618   11068 ssh_runner.go:195] Run: crio --version
	I1124 08:29:19.886959   11068 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 08:29:19.888122   11068 cli_runner.go:164] Run: docker network inspect addons-962100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 08:29:19.904835   11068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 08:29:19.908631   11068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 08:29:19.918141   11068 kubeadm.go:884] updating cluster {Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 08:29:19.918315   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.071372   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.218390   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.360631   11068 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:20.360790   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.505079   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.674248   11068 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:20.815718   11068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:20.845250   11068 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 08:29:20.845271   11068 crio.go:433] Images already preloaded, skipping extraction
	I1124 08:29:20.845310   11068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:20.868135   11068 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 08:29:20.868155   11068 cache_images.go:86] Images are preloaded, skipping loading
	I1124 08:29:20.868163   11068 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1124 08:29:20.868240   11068 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-962100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 08:29:20.868298   11068 ssh_runner.go:195] Run: crio config
	I1124 08:29:20.911083   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:29:20.911103   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:29:20.911118   11068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 08:29:20.911143   11068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-962100 NodeName:addons-962100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 08:29:20.911250   11068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-962100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 08:29:20.911310   11068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 08:29:20.919005   11068 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 08:29:20.919058   11068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 08:29:20.926370   11068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 08:29:20.938246   11068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 08:29:20.952585   11068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
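The rendered config is staged as kubeadm.yaml.new here and copied over kubeadm.yaml further down before `kubeadm init --config` runs. When reproducing this by hand, recent kubeadm releases can sanity-check the file first (hedged sketch; `kubeadm config validate` is assumed available in this kubeadm version):

	/var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new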
	I1124 08:29:20.964634   11068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 08:29:20.968178   11068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
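The /etc/hosts rewrite above (also used earlier for host.minikube.internal) is an atomic replace: filter out any stale entry, append the new one into a temp file, then copy the whole file back. In generic form (hypothetical helper name, same commands as the log):

	add_host() {   # usage: add_host 192.168.49.2 control-plane.minikube.internal
	  local ip=$1 name=$2
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}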
	I1124 08:29:20.977806   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:21.055565   11068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 08:29:21.078627   11068 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100 for IP: 192.168.49.2
	I1124 08:29:21.078649   11068 certs.go:195] generating shared ca certs ...
	I1124 08:29:21.078667   11068 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.078784   11068 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 08:29:21.133399   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt ...
	I1124 08:29:21.133431   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt: {Name:mkec0fc3ca0f5dbe0072c3481bb90432f10f6787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.133630   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key ...
	I1124 08:29:21.133643   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key: {Name:mk97df4b1f29dbb411889911d17e112c712f049b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.133743   11068 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 08:29:21.315716   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt ...
	I1124 08:29:21.315744   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt: {Name:mk540fafc920f8ec7c9e11ac00269b1fb38df736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.315938   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key ...
	I1124 08:29:21.315956   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key: {Name:mk1eb8415a321de0ec27a8a4e25a4deffcde087f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.316052   11068 certs.go:257] generating profile certs ...
	I1124 08:29:21.316128   11068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key
	I1124 08:29:21.316145   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt with IP's: []
	I1124 08:29:21.421292   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt ...
	I1124 08:29:21.421324   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: {Name:mka27ab45669c8e04fef7a48ddac74e354a5583b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.421536   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key ...
	I1124 08:29:21.421554   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.key: {Name:mk92a81950b37e61a5f798503f887e153123cd4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.421649   11068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1
	I1124 08:29:21.421673   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 08:29:21.551793   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 ...
	I1124 08:29:21.551829   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1: {Name:mk968d6aaade4ffffb613ba8d7ba168f2f3bffb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.551988   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1 ...
	I1124 08:29:21.552001   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1: {Name:mk85fb87eb5bc673a2157245b545efee57e2c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.552066   11068 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt.b64fbab1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt
	I1124 08:29:21.552158   11068 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key.b64fbab1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key
	I1124 08:29:21.552212   11068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key
	I1124 08:29:21.552229   11068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt with IP's: []
	I1124 08:29:21.580844   11068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt ...
	I1124 08:29:21.580881   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt: {Name:mk767ceea1701aee964ef40b5686c0c967807c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.581026   11068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key ...
	I1124 08:29:21.581037   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key: {Name:mk5b16a40fe0ef51d8e786498ae7b29cb6116001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:21.581192   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 08:29:21.581225   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 08:29:21.581253   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 08:29:21.581277   11068 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 08:29:21.581835   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 08:29:21.599009   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 08:29:21.615285   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 08:29:21.631735   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 08:29:21.647931   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 08:29:21.664190   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 08:29:21.680344   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 08:29:21.696682   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 08:29:21.712854   11068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 08:29:21.731228   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 08:29:21.742926   11068 ssh_runner.go:195] Run: openssl version
	I1124 08:29:21.748819   11068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 08:29:21.759435   11068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.763072   11068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.763117   11068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:21.796840   11068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
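The b5213941.0 target above follows the standard OpenSSL subject-hash convention: the hash printed by the first command becomes the symlink's basename, with a .0 suffix for the first certificate carrying that hash:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0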
	I1124 08:29:21.805439   11068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 08:29:21.808934   11068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 08:29:21.808987   11068 kubeadm.go:401] StartCluster: {Name:addons-962100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-962100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:21.809062   11068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:29:21.809110   11068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:29:21.835442   11068 cri.go:89] found id: ""
	I1124 08:29:21.835495   11068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 08:29:21.843113   11068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 08:29:21.850375   11068 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 08:29:21.850433   11068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 08:29:21.857622   11068 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 08:29:21.857640   11068 kubeadm.go:158] found existing configuration files:
	
	I1124 08:29:21.857671   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 08:29:21.864842   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 08:29:21.864896   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 08:29:21.871713   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 08:29:21.878766   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 08:29:21.878816   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 08:29:21.885568   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 08:29:21.892544   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 08:29:21.892597   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 08:29:21.899265   11068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 08:29:21.906198   11068 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 08:29:21.906244   11068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 08:29:21.913130   11068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 08:29:21.980613   11068 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 08:29:22.041829   11068 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 08:29:31.168015   11068 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 08:29:31.168063   11068 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 08:29:31.168143   11068 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 08:29:31.168234   11068 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 08:29:31.168303   11068 kubeadm.go:319] OS: Linux
	I1124 08:29:31.168380   11068 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 08:29:31.168457   11068 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 08:29:31.168521   11068 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 08:29:31.168584   11068 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 08:29:31.168647   11068 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 08:29:31.168720   11068 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 08:29:31.168784   11068 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 08:29:31.168839   11068 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 08:29:31.168943   11068 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 08:29:31.169076   11068 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 08:29:31.169205   11068 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 08:29:31.169295   11068 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 08:29:31.170708   11068 out.go:252]   - Generating certificates and keys ...
	I1124 08:29:31.170791   11068 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 08:29:31.170868   11068 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 08:29:31.170954   11068 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 08:29:31.171012   11068 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 08:29:31.171088   11068 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 08:29:31.171139   11068 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 08:29:31.171185   11068 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 08:29:31.171317   11068 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-962100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 08:29:31.171467   11068 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 08:29:31.171616   11068 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-962100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 08:29:31.171700   11068 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 08:29:31.171774   11068 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 08:29:31.171845   11068 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 08:29:31.171930   11068 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 08:29:31.172007   11068 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 08:29:31.172064   11068 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 08:29:31.172111   11068 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 08:29:31.172174   11068 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 08:29:31.172235   11068 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 08:29:31.172326   11068 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 08:29:31.172432   11068 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 08:29:31.173625   11068 out.go:252]   - Booting up control plane ...
	I1124 08:29:31.173699   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 08:29:31.173767   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 08:29:31.173835   11068 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 08:29:31.173946   11068 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 08:29:31.174051   11068 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 08:29:31.174191   11068 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 08:29:31.174304   11068 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 08:29:31.174371   11068 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 08:29:31.174497   11068 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 08:29:31.174608   11068 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 08:29:31.174697   11068 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.32868ms
	I1124 08:29:31.174811   11068 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 08:29:31.174920   11068 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 08:29:31.175056   11068 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 08:29:31.175171   11068 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 08:29:31.175252   11068 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.449484s
	I1124 08:29:31.175325   11068 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.192961736s
	I1124 08:29:31.175445   11068 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001276406s
	I1124 08:29:31.175543   11068 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 08:29:31.175653   11068 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 08:29:31.175705   11068 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 08:29:31.175862   11068 kubeadm.go:319] [mark-control-plane] Marking the node addons-962100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 08:29:31.175916   11068 kubeadm.go:319] [bootstrap-token] Using token: cxd9y9.cgdhsbm31ng53iju
	I1124 08:29:31.177075   11068 out.go:252]   - Configuring RBAC rules ...
	I1124 08:29:31.177163   11068 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 08:29:31.177234   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 08:29:31.177391   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 08:29:31.177584   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 08:29:31.177690   11068 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 08:29:31.177798   11068 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 08:29:31.177975   11068 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 08:29:31.178025   11068 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 08:29:31.178098   11068 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 08:29:31.178112   11068 kubeadm.go:319] 
	I1124 08:29:31.178172   11068 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 08:29:31.178181   11068 kubeadm.go:319] 
	I1124 08:29:31.178269   11068 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 08:29:31.178284   11068 kubeadm.go:319] 
	I1124 08:29:31.178327   11068 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 08:29:31.178428   11068 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 08:29:31.178502   11068 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 08:29:31.178513   11068 kubeadm.go:319] 
	I1124 08:29:31.178578   11068 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 08:29:31.178584   11068 kubeadm.go:319] 
	I1124 08:29:31.178623   11068 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 08:29:31.178629   11068 kubeadm.go:319] 
	I1124 08:29:31.178672   11068 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 08:29:31.178741   11068 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 08:29:31.178801   11068 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 08:29:31.178806   11068 kubeadm.go:319] 
	I1124 08:29:31.178875   11068 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 08:29:31.178940   11068 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 08:29:31.178945   11068 kubeadm.go:319] 
	I1124 08:29:31.179040   11068 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cxd9y9.cgdhsbm31ng53iju \
	I1124 08:29:31.179147   11068 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 08:29:31.179167   11068 kubeadm.go:319] 	--control-plane 
	I1124 08:29:31.179174   11068 kubeadm.go:319] 
	I1124 08:29:31.179249   11068 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 08:29:31.179255   11068 kubeadm.go:319] 
	I1124 08:29:31.179373   11068 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cxd9y9.cgdhsbm31ng53iju \
	I1124 08:29:31.179512   11068 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
	I1124 08:29:31.179528   11068 cni.go:84] Creating CNI manager for ""
	I1124 08:29:31.179538   11068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:29:31.180916   11068 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 08:29:31.182097   11068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 08:29:31.186553   11068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 08:29:31.186569   11068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 08:29:31.199056   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
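The manifest applied above deploys kindnet as a DaemonSet; a hedged way to watch it come up (the DaemonSet name kindnet is assumed from minikube's bundled manifest, not shown in this log):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status ds/kindnet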
	I1124 08:29:31.389470   11068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 08:29:31.389537   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-962100 minikube.k8s.io/updated_at=2025_11_24T08_29_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=addons-962100 minikube.k8s.io/primary=true
	I1124 08:29:31.389607   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:31.466818   11068 ops.go:34] apiserver oom_adj: -16
	I1124 08:29:31.466935   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:31.967874   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:32.467036   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:32.967571   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:33.467079   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:33.967316   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:34.467111   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:34.967642   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:35.467723   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:35.967683   11068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:29:36.030290   11068 kubeadm.go:1114] duration metric: took 4.640710394s to wait for elevateKubeSystemPrivileges
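The burst of `kubectl get sa default` calls above is a poll loop: minikube waits (~4.6s here) for the default ServiceAccount to exist so the cluster-admin binding created alongside it can take effect. As a standalone sketch (the 0.5s cadence is inferred from the timestamps):

	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done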
	I1124 08:29:36.030326   11068 kubeadm.go:403] duration metric: took 14.221343677s to StartCluster
	I1124 08:29:36.030373   11068 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:36.030496   11068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:29:36.030962   11068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:36.031170   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 08:29:36.031201   11068 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:29:36.031263   11068 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 08:29:36.031396   11068 addons.go:70] Setting yakd=true in profile "addons-962100"
	I1124 08:29:36.031413   11068 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-962100"
	I1124 08:29:36.031420   11068 addons.go:239] Setting addon yakd=true in "addons-962100"
	I1124 08:29:36.031429   11068 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-962100"
	I1124 08:29:36.031445   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:36.031455   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031460   11068 addons.go:70] Setting registry-creds=true in profile "addons-962100"
	I1124 08:29:36.031476   11068 addons.go:70] Setting cloud-spanner=true in profile "addons-962100"
	I1124 08:29:36.031483   11068 addons.go:239] Setting addon registry-creds=true in "addons-962100"
	I1124 08:29:36.031488   11068 addons.go:239] Setting addon cloud-spanner=true in "addons-962100"
	I1124 08:29:36.031481   11068 addons.go:70] Setting metrics-server=true in profile "addons-962100"
	I1124 08:29:36.031504   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031503   11068 addons.go:70] Setting storage-provisioner=true in profile "addons-962100"
	I1124 08:29:36.031507   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031517   11068 addons.go:239] Setting addon storage-provisioner=true in "addons-962100"
	I1124 08:29:36.031522   11068 addons.go:239] Setting addon metrics-server=true in "addons-962100"
	I1124 08:29:36.031539   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031554   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031555   11068 addons.go:70] Setting default-storageclass=true in profile "addons-962100"
	I1124 08:29:36.031580   11068 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-962100"
	I1124 08:29:36.031893   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031967   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031978   11068 addons.go:70] Setting volcano=true in profile "addons-962100"
	I1124 08:29:36.031988   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031997   11068 addons.go:70] Setting volumesnapshots=true in profile "addons-962100"
	I1124 08:29:36.032008   11068 addons.go:70] Setting registry=true in profile "addons-962100"
	I1124 08:29:36.032013   11068 addons.go:70] Setting inspektor-gadget=true in profile "addons-962100"
	I1124 08:29:36.032026   11068 addons.go:239] Setting addon registry=true in "addons-962100"
	I1124 08:29:36.032029   11068 addons.go:239] Setting addon inspektor-gadget=true in "addons-962100"
	I1124 08:29:36.032046   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032057   11068 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-962100"
	I1124 08:29:36.032068   11068 addons.go:70] Setting gcp-auth=true in profile "addons-962100"
	I1124 08:29:36.032087   11068 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-962100"
	I1124 08:29:36.032091   11068 mustload.go:66] Loading cluster: addons-962100
	I1124 08:29:36.032103   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032148   11068 addons.go:70] Setting ingress=true in profile "addons-962100"
	I1124 08:29:36.032162   11068 addons.go:239] Setting addon ingress=true in "addons-962100"
	I1124 08:29:36.032189   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032250   11068 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:36.032481   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032488   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032520   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032639   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031967   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031467   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.034528   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032047   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.031988   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031998   11068 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-962100"
	I1124 08:29:36.035177   11068 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-962100"
	I1124 08:29:36.031494   11068 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-962100"
	I1124 08:29:36.035207   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.035218   11068 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-962100"
	I1124 08:29:36.035325   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.035489   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.035640   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.036434   11068 out.go:179] * Verifying Kubernetes components...
	I1124 08:29:36.031401   11068 addons.go:70] Setting ingress-dns=true in profile "addons-962100"
	I1124 08:29:36.036551   11068 addons.go:239] Setting addon ingress-dns=true in "addons-962100"
	I1124 08:29:36.036585   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.037071   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.031991   11068 addons.go:239] Setting addon volcano=true in "addons-962100"
	I1124 08:29:36.037495   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.032002   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.032018   11068 addons.go:239] Setting addon volumesnapshots=true in "addons-962100"
	I1124 08:29:36.038270   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.038281   11068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:36.045985   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.045985   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.093253   11068 addons.go:239] Setting addon default-storageclass=true in "addons-962100"
	I1124 08:29:36.093384   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.093615   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.094027   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.098553   11068 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 08:29:36.100495   11068 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 08:29:36.101884   11068 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 08:29:36.101913   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 08:29:36.101964   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
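
The repeated "docker container inspect -f" calls above and below resolve the host port that Docker published for the container's SSH endpoint (22/tcp); the sshutil lines further down then dial 127.0.0.1 on that port. A minimal Go sketch of the same lookup (sshHostPort is an illustrative name, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port Docker mapped to 22/tcp inside the
    // named container, mirroring the inspect template seen in the log.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("addons-962100")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh port:", port) // e.g. 32768, matching the sshutil lines below
    }

The template indexes .NetworkSettings.Ports["22/tcp"] and takes the first binding's HostPort, which is why every sshutil line in this run reports Port:32768.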
	I1124 08:29:36.104644   11068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 08:29:36.105951   11068 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:29:36.106844   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 08:29:36.106918   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.112728   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 08:29:36.115137   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 08:29:36.116996   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 08:29:36.117562   11068 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 08:29:36.118409   11068 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 08:29:36.119511   11068 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 08:29:36.119675   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 08:29:36.119836   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.120442   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 08:29:36.121794   11068 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:29:36.121813   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 08:29:36.121869   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.122027   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 08:29:36.123240   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 08:29:36.123295   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 08:29:36.124404   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 08:29:36.124421   11068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 08:29:36.124471   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.124479   11068 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 08:29:36.125539   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 08:29:36.125758   11068 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:29:36.125998   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 08:29:36.126232   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
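
Lines of the form "scp memory --> <path> (<n> bytes)" indicate that the addon manifest is generated in memory and streamed to the node over SSH rather than copied from a file on disk. A hedged sketch of that pattern using golang.org/x/crypto/ssh (pushAsset is illustrative; minikube's ssh_runner differs in detail):

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushAsset streams an in-memory manifest to a path on the node by piping
    // it into `sudo install` over one SSH session, matching the
    // "scp memory --> <path> (<n> bytes)" lines above in spirit.
    func pushAsset(client *ssh.Client, data []byte, dest string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo install -m 0644 /dev/stdin %s", dest))
    }

    func main() {
    	// Key path and port are taken from the sshutil lines in this log.
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	key, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(key)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM, not for production
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	if err := pushAsset(client, []byte("apiVersion: v1\n"), "/etc/kubernetes/addons/demo.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }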
	I1124 08:29:36.137193   11068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 08:29:36.140190   11068 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 08:29:36.140318   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 08:29:36.140350   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 08:29:36.140435   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.141524   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 08:29:36.141544   11068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 08:29:36.141600   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.156719   11068 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 08:29:36.158121   11068 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-962100"
	I1124 08:29:36.158158   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:36.159029   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 08:29:36.159047   11068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 08:29:36.159152   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.159806   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:36.165752   11068 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 08:29:36.167253   11068 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:29:36.167294   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 08:29:36.167399   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.169283   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.170626   11068 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 08:29:36.171652   11068 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:29:36.171674   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 08:29:36.171723   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	W1124 08:29:36.172274   11068 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 08:29:36.176934   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.178087   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.178572   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.194294   11068 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 08:29:36.196413   11068 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:29:36.196443   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 08:29:36.196503   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.197775   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 08:29:36.200920   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:36.202809   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:36.205436   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.209426   11068 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:29:36.209451   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 08:29:36.209528   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.211151   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.215042   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.221532   11068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
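
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of "errors", then feeds the result to kubectl replace. Reconstructed from the sed expressions, the injected Corefile fragment is:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }

together with a "log" line inserted before "errors". The "host record injected into CoreDNS's ConfigMap" line below confirms the replace succeeded, after which pods can resolve host.minikube.internal to the gateway address 192.168.49.1.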
	I1124 08:29:36.225528   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.228607   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.228668   11068 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 08:29:36.228681   11068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 08:29:36.228727   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.232133   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.246650   11068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 08:29:36.248446   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.255526   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.257790   11068 out.go:179]   - Using image docker.io/busybox:stable
	I1124 08:29:36.258945   11068 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 08:29:36.260229   11068 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:29:36.260289   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 08:29:36.260372   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:36.267412   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.267941   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	W1124 08:29:36.269950   11068 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 08:29:36.270448   11068 retry.go:31] will retry after 313.082357ms: ssh: handshake failed: EOF
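
The handshake failure above is handled by the same jittered-backoff helper (retry.go) that later re-runs a failed kubectl apply. A minimal sketch of the pattern, with retryWithBackoff as an illustrative stand-in for minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn until it succeeds or attempts are exhausted,
    // sleeping a jittered, growing delay between tries, as the
    // "will retry after ..." log lines suggest.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil
    	})
    }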
	I1124 08:29:36.289469   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:36.362304   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:29:36.381628   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:29:36.385302   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 08:29:36.385344   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 08:29:36.401992   11068 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 08:29:36.402018   11068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 08:29:36.407723   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 08:29:36.407761   11068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 08:29:36.417738   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 08:29:36.417759   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 08:29:36.424059   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 08:29:36.424081   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 08:29:36.426609   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:29:36.430119   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:29:36.430825   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:29:36.437889   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 08:29:36.437917   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 08:29:36.445727   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:29:36.447712   11068 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:29:36.447730   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 08:29:36.455562   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 08:29:36.455584   11068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 08:29:36.459797   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:29:36.463703   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 08:29:36.463722   11068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 08:29:36.481731   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 08:29:36.481760   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 08:29:36.488935   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:29:36.493994   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 08:29:36.499740   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:29:36.503088   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 08:29:36.503111   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 08:29:36.510766   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 08:29:36.510790   11068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 08:29:36.512559   11068 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:29:36.512584   11068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 08:29:36.546505   11068 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 08:29:36.546540   11068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 08:29:36.549740   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 08:29:36.549761   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 08:29:36.570514   11068 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:29:36.570541   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 08:29:36.578550   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:29:36.589199   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 08:29:36.589231   11068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 08:29:36.609720   11068 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 08:29:36.609771   11068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 08:29:36.634550   11068 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 08:29:36.636013   11068 node_ready.go:35] waiting up to 6m0s for node "addons-962100" to be "Ready" ...
	I1124 08:29:36.636687   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:29:36.653601   11068 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:36.653628   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 08:29:36.671562   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 08:29:36.671586   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 08:29:36.704768   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:36.747245   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 08:29:36.747266   11068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 08:29:36.802886   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 08:29:36.802906   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 08:29:36.809792   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 08:29:36.830146   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 08:29:36.830169   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 08:29:36.893578   11068 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:29:36.893605   11068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 08:29:36.929067   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:29:37.140528   11068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-962100" context rescaled to 1 replicas
	I1124 08:29:37.510637   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.080438247s)
	I1124 08:29:37.510730   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.079884788s)
	I1124 08:29:37.510782   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.065035763s)
	I1124 08:29:37.510861   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.051040652s)
	I1124 08:29:37.511154   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.022188905s)
	I1124 08:29:37.511220   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.01720527s)
	I1124 08:29:37.511283   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.011509055s)
	I1124 08:29:37.511307   11068 addons.go:495] Verifying addon registry=true in "addons-962100"
	I1124 08:29:37.511426   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.08479159s)
	I1124 08:29:37.511449   11068 addons.go:495] Verifying addon ingress=true in "addons-962100"
	I1124 08:29:37.511974   11068 addons.go:495] Verifying addon metrics-server=true in "addons-962100"
	I1124 08:29:37.516711   11068 out.go:179] * Verifying registry addon...
	I1124 08:29:37.516790   11068 out.go:179] * Verifying ingress addon...
	I1124 08:29:37.516826   11068 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-962100 service yakd-dashboard -n yakd-dashboard
	
	I1124 08:29:37.519116   11068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 08:29:37.519159   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 08:29:37.526809   11068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 08:29:37.526836   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:37.527042   11068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 08:29:37.527061   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
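
The kapi.go lines here and throughout the rest of this run poll the pods matching each label selector until they leave Pending. A sketch of an equivalent wait with client-go (waitForPodsRunning is illustrative and assumes kubeconfig access; minikube's kapi helper differs):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls pods matching selector in ns until all are Running.
    func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // keep polling, as the log does on transient errors
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)
    	fmt.Println(waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }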
	I1124 08:29:38.021705   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:38.021916   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:38.024261   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319458307s)
	I1124 08:29:38.024309   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.214491375s)
	W1124 08:29:38.024315   11068 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 08:29:38.024360   11068 retry.go:31] will retry after 313.967072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
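
The failure above is an ordering race rather than a broken manifest: the VolumeSnapshot CRDs are created in the same apply batch that instantiates a VolumeSnapshotClass, so no REST mapping for snapshot.storage.k8s.io/v1 exists yet when csi-hostpath-snapshotclass.yaml is processed, hence "ensure CRDs are installed first". The retried apply below succeeds once the CRDs are registered. One way to avoid the race is to wait for the CRDs to be Established before applying dependents; a kubectl-driven sketch (applyInOrder and the two-step split are illustrative, not minikube's code):

    package main

    import (
    	"log"
    	"os/exec"
    )

    // applyInOrder registers a CRD, waits for it to be Established, and only
    // then applies an object of that kind, sidestepping the race logged above.
    func applyInOrder() error {
    	steps := [][]string{
    		{"kubectl", "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
    		{"kubectl", "wait", "--for", "condition=Established", "--timeout=60s",
    			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
    		{"kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			log.Printf("%v: %s", err, out)
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := applyInOrder(); err != nil {
    		log.Fatal(err)
    	}
    }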
	I1124 08:29:38.024540   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095370317s)
	I1124 08:29:38.024566   11068 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-962100"
	I1124 08:29:38.026385   11068 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 08:29:38.028670   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 08:29:38.033057   11068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 08:29:38.033084   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:38.339032   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:29:38.522729   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:38.522869   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:38.530969   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:38.638429   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:39.021887   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:39.022035   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:39.030715   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:39.522352   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:39.522480   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:39.531271   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:40.022073   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:40.022251   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:40.031099   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:40.522425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:40.522561   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:40.531176   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:40.639061   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:40.806583   11068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467510719s)
	I1124 08:29:41.022857   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:41.022987   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:41.030883   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:41.522574   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:41.522804   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:41.531442   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:42.022240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:42.022459   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:42.031047   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:42.522089   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:42.522314   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:42.531083   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:43.021828   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:43.021951   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:43.030615   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:43.139178   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:43.522423   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:43.522572   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:43.531407   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:43.699862   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 08:29:43.699924   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:43.716899   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:43.821133   11068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 08:29:43.832643   11068 addons.go:239] Setting addon gcp-auth=true in "addons-962100"
	I1124 08:29:43.832690   11068 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:29:43.833012   11068 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:29:43.850328   11068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 08:29:43.850392   11068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:29:43.867451   11068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:29:43.965832   11068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:29:43.967079   11068 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 08:29:43.968049   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 08:29:43.968069   11068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 08:29:43.981280   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 08:29:43.981300   11068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 08:29:43.993466   11068 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:29:43.993502   11068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 08:29:44.005368   11068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:29:44.022198   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:44.022396   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:44.031596   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:44.295528   11068 addons.go:495] Verifying addon gcp-auth=true in "addons-962100"
	I1124 08:29:44.296881   11068 out.go:179] * Verifying gcp-auth addon...
	I1124 08:29:44.298908   11068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 08:29:44.301096   11068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 08:29:44.301113   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:44.521872   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:44.522002   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:44.530739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:44.801453   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:45.022158   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:45.022316   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:45.031217   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:45.302309   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:45.522075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:45.522153   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:45.530856   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:45.639237   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:45.801566   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:46.022274   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:46.022356   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:46.031030   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:46.301918   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:46.522316   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:46.522493   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:46.531374   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:46.802166   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:47.021789   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:47.021838   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:47.030459   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:47.302012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:47.522862   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:47.522942   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:47.530631   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:47.801277   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:48.021790   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:48.021888   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:48.030913   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:48.139496   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:48.302012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:48.522457   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:48.522649   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:48.531285   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:48.802069   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:49.021424   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:49.021466   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:49.031304   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:49.301270   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:49.521908   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:49.521963   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:49.530619   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:49.801356   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:50.021955   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:50.022150   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:50.031185   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:50.302035   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:50.522580   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:50.522723   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:50.531570   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:50.638979   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:50.801429   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:51.022444   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:51.022581   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:51.031413   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:51.302199   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:51.522129   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:51.522162   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:51.530999   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:51.802155   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:52.021753   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:52.021804   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:52.030720   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:52.301361   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:52.522170   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:52.522393   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:52.530776   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:52.640703   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:52.802013   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:53.022451   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:53.022635   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:53.031384   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:53.301882   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:53.522744   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:53.522793   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:53.531546   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:53.802404   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:54.022030   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:54.022136   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:54.030842   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:54.301829   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:54.522470   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:54.522588   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:54.531135   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:54.801886   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:55.022587   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:55.022624   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:55.031183   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:55.138618   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:55.301901   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:55.522869   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:55.522922   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:55.530778   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:55.801558   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:56.022171   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:56.022202   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:56.030964   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:56.301936   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:56.522372   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:56.522553   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:56.531162   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:56.801949   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:57.022276   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:57.022474   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:57.031241   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:57.302173   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:57.521768   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:57.521923   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:57.530800   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:29:57.639227   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:29:57.801568   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:58.021926   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:58.022149   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:58.030675   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:58.301595   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:58.522211   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:58.522441   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:58.531116   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:58.801754   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:59.022233   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:59.022414   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:59.031184   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:59.301919   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:29:59.523161   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:29:59.523255   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:29:59.531109   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:29:59.801780   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:00.022482   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:00.022599   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:00.031682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:00.138872   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:00.301368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:00.522050   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:00.522139   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:00.530997   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:00.801840   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:01.022573   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:01.022591   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:01.031607   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:01.301913   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:01.522875   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:01.522928   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:01.530808   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:01.801800   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:02.022383   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:02.022506   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:02.031495   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:02.139089   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:02.301749   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:02.522298   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:02.522433   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:02.531657   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:02.808710   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:03.022531   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:03.022590   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:03.032016   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:03.302211   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:03.521770   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:03.521878   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:03.530786   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:03.801906   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:04.022413   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:04.022569   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:04.031439   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:04.301596   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:04.522220   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:04.522452   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:04.531586   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:04.639000   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:04.801425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:05.022060   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:05.022195   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:05.031272   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:05.302171   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:05.521634   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:05.521688   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:05.531819   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:05.801770   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:06.022454   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:06.022675   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:06.031811   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:06.301853   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:06.522589   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:06.522593   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:06.531580   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:06.639175   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:06.801526   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:07.022387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:07.022375   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:07.031524   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:07.301420   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:07.522057   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:07.522216   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:07.530940   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:07.802046   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:08.021644   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:08.021900   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:08.030623   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:08.301595   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:08.522117   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:08.522197   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:08.531269   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:08.801505   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:09.022128   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:09.022368   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:09.031368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:09.138883   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:09.301267   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:09.522282   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:09.522374   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:09.531120   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:09.802146   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:10.021612   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:10.021709   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:10.031904   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:10.302055   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:10.521752   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:10.521933   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:10.531041   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:10.801934   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:11.022448   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:11.022585   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:11.031701   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:11.139175   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:11.301651   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:11.522760   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:11.522763   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:11.531817   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:11.801795   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:12.022568   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:12.022641   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:12.031559   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:12.301607   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:12.522212   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:12.522384   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:12.531318   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:12.802453   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:13.022616   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:13.022791   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:13.030963   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:13.139381   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:13.301888   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:13.522718   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:13.522892   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:13.530946   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:13.802270   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:14.022114   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:14.022125   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:14.031181   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:14.301318   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:14.522015   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:14.522097   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:14.530810   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:14.801868   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:15.022884   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:15.023064   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:15.031201   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:15.302296   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:15.521874   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:15.522027   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:15.530842   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 08:30:15.639246   11068 node_ready.go:57] node "addons-962100" has "Ready":"False" status (will retry)
	I1124 08:30:15.801682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:16.022246   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:16.022379   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:16.031039   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:16.302014   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:16.522712   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:16.522868   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:16.530795   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:16.801590   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:17.022295   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:17.022442   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:17.031128   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:17.301861   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:17.522247   11068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 08:30:17.522267   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:17.522271   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:17.531899   11068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 08:30:17.531924   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
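The kapi.go:96 and kapi.go:86 lines above are minikube's addon wait loop: roughly every 500ms it lists kube-system pods by label selector and reports each pod's phase until the pods exist and reach Running. A minimal sketch of that pattern with client-go (the kubeconfig setup and the hard-coded selector are illustrative, not minikube's actual helpers):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig at the default location; minikube builds
    	// its REST config internally instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	sel := "kubernetes.io/minikube-addons=registry" // one of the selectors from the log
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    			if err != nil {
    				return false, nil // tolerate transient API errors and keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return len(pods.Items) > 0, nil // "Found N Pods" fires once the list is non-empty
    		})
    	if err != nil {
    		panic(err)
    	}
    }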
	I1124 08:30:17.639244   11068 node_ready.go:49] node "addons-962100" is "Ready"
	I1124 08:30:17.639281   11068 node_ready.go:38] duration metric: took 41.003238351s for node "addons-962100" to be "Ready" ...
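The node_ready.go wait that just finished (41s) amounts to re-reading the Node object until its Ready condition reports True. A sketch of the condition check, reusing the clientset from the sketch above:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the node's Ready condition is True; the log
    // prints "will retry" while this is still false.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }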
	I1124 08:30:17.639296   11068 api_server.go:52] waiting for apiserver process to appear ...
	I1124 08:30:17.639363   11068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:30:17.658671   11068 api_server.go:72] duration metric: took 41.627435209s to wait for apiserver process to appear ...
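The "apiserver process" check is the pgrep shown above, run inside the node over SSH: -x requires the pattern to match exactly, -n picks the newest matching process, -f matches against the full command line. An illustrative local equivalent (minikube itself routes this through its ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // apiserverRunning mirrors the pgrep check from the log. pgrep exits
    // non-zero when nothing matches, which keeps the wait loop going.
    func apiserverRunning() bool {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return false
    	}
    	fmt.Printf("kube-apiserver pid: %s", out)
    	return true
    }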
	I1124 08:30:17.658700   11068 api_server.go:88] waiting for apiserver healthz status ...
	I1124 08:30:17.658724   11068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 08:30:17.665065   11068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 08:30:17.666098   11068 api_server.go:141] control plane version: v1.34.2
	I1124 08:30:17.666125   11068 api_server.go:131] duration metric: took 7.416605ms to wait for apiserver health ...
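The healthz probe is a plain HTTPS GET against the apiserver expecting a 200 response with body "ok", as logged above. minikube authenticates with its client certificates; the sketch below skips TLS verification only to stay self-contained, and an unauthenticated call may be rejected on clusters that disable anonymous auth:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz performs the probe from the log, e.g. against
    // https://192.168.49.2:8443/healthz.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expects "ok"
    	return nil
    }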
	I1124 08:30:17.666136   11068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 08:30:17.671852   11068 system_pods.go:59] 20 kube-system pods found
	I1124 08:30:17.671889   11068 system_pods.go:61] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:17.671901   11068 system_pods.go:61] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:17.671911   11068 system_pods.go:61] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:17.671924   11068 system_pods.go:61] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:17.671931   11068 system_pods.go:61] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:17.671939   11068 system_pods.go:61] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:17.671945   11068 system_pods.go:61] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:17.671955   11068 system_pods.go:61] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:17.671963   11068 system_pods.go:61] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:17.671974   11068 system_pods.go:61] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:17.671982   11068 system_pods.go:61] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:17.671988   11068 system_pods.go:61] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:17.671998   11068 system_pods.go:61] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:17.672007   11068 system_pods.go:61] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:17.672017   11068 system_pods.go:61] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:17.672023   11068 system_pods.go:61] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:17.672028   11068 system_pods.go:61] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:17.672040   11068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.672054   11068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.672065   11068 system_pods.go:61] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:17.672076   11068 system_pods.go:74] duration metric: took 5.933136ms to wait for pod list to return data ...
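The per-pod "Pending / Ready:ContainersNotReady (...)" strings above are rendered from each pod's phase plus its Ready and ContainersReady conditions; approximately (a guess at the rendering, not minikube's exact code):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // podState reproduces the shape of the log lines: phase, then any
    // not-yet-true Ready/ContainersReady condition with reason and message.
    func podState(p *corev1.Pod) string {
    	s := string(p.Status.Phase)
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady || c.Type == corev1.ContainersReady {
    			if c.Status != corev1.ConditionTrue {
    				s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
    			}
    		}
    	}
    	return s
    }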
	I1124 08:30:17.672089   11068 default_sa.go:34] waiting for default service account to be created ...
	I1124 08:30:17.674762   11068 default_sa.go:45] found service account: "default"
	I1124 08:30:17.674781   11068 default_sa.go:55] duration metric: took 2.682517ms for default service account to be created ...
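The default-service-account wait is essentially a one-shot Get (2.68ms here), satisfied as soon as kube-controller-manager has created the account:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // defaultSAExists checks for the "default" ServiceAccount in the default
    // namespace, which the controller-manager creates shortly after startup.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) bool {
    	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    	return err == nil
    }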
	I1124 08:30:17.674791   11068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 08:30:17.772627   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:17.772658   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:17.772666   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:17.772672   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:17.772680   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:17.772686   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:17.772691   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:17.772696   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:17.772700   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:17.772703   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:17.772710   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:17.772713   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:17.772717   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:17.772722   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:17.772727   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:17.772739   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:17.772744   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:17.772752   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:17.772756   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.772766   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:17.772771   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:17.772786   11068 retry.go:31] will retry after 252.903354ms: missing components: kube-dns
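The retry.go:31 lines ("will retry after 252.903354ms") come from a jittered, growing backoff around the k8s-apps check, which is why the delays drift (253ms, 275ms, 299ms). apimachinery's wait.ExponentialBackoff expresses the same idea; the parameters below are illustrative guesses, not minikube's actual values:

    package main

    import (
    	"errors"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForKubeDNS retries checkFn with growing, jittered delays, matching
    // the irregular intervals seen in the log.
    func waitForKubeDNS(checkFn func() (bool, error)) error {
    	backoff := wait.Backoff{
    		Duration: 250 * time.Millisecond, // first delay
    		Factor:   1.1,                    // grow each step
    		Jitter:   0.1,                    // randomize a little
    		Steps:    50,
    	}
    	if err := wait.ExponentialBackoff(backoff, checkFn); err != nil {
    		return errors.New("missing components: kube-dns")
    	}
    	return nil
    }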
	I1124 08:30:17.870758   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:18.022696   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:18.022933   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:18.029761   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.029792   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.029805   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:18.029818   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.029825   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.029835   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.029848   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.029856   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.029862   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.029868   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.029876   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.029881   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.029886   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.029894   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.029901   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.029918   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.029926   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.029948   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.029959   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.029971   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.029978   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:18.029993   11068 retry.go:31] will retry after 274.696351ms: missing components: kube-dns
	I1124 08:30:18.031425   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:18.302742   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:18.309392   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.309434   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.309447   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:18.309459   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.309467   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.309477   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.309486   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.309493   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.309506   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.309512   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.309525   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.309534   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.309539   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.309550   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.309557   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.309568   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.309576   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.309583   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.309591   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.309599   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.309607   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:18.309623   11068 retry.go:31] will retry after 299.191807ms: missing components: kube-dns
	I1124 08:30:18.525016   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:18.525296   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:18.533406   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:18.628794   11068 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:18.628833   11068 system_pods.go:89] "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:18.628844   11068 system_pods.go:89] "coredns-66bc5c9577-hvw7n" [dfdf69ed-2329-4942-ac69-ab1a57dd2de0] Running
	I1124 08:30:18.628855   11068 system_pods.go:89] "csi-hostpath-attacher-0" [9d36daba-9c19-43f1-a63f-aae776027942] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:18.628863   11068 system_pods.go:89] "csi-hostpath-resizer-0" [fca87f72-b886-417e-a03c-30bf9b308ee8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 08:30:18.628883   11068 system_pods.go:89] "csi-hostpathplugin-lnrv4" [b94ccba6-c88a-4e9b-b28a-a85ebbefb419] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:18.628894   11068 system_pods.go:89] "etcd-addons-962100" [c489e5fe-67b6-4621-8142-550f2b664cc4] Running
	I1124 08:30:18.628900   11068 system_pods.go:89] "kindnet-kzhgg" [ad47b283-ac11-4c7c-a310-2017634fa058] Running
	I1124 08:30:18.628906   11068 system_pods.go:89] "kube-apiserver-addons-962100" [8948c171-6691-4ce1-a02f-b09a46ca4714] Running
	I1124 08:30:18.628911   11068 system_pods.go:89] "kube-controller-manager-addons-962100" [fecede32-df73-47bf-a85d-c8f667fb6ea2] Running
	I1124 08:30:18.628919   11068 system_pods.go:89] "kube-ingress-dns-minikube" [e1dc05fe-5e82-4ad5-8514-c37eab1b2edc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:18.628924   11068 system_pods.go:89] "kube-proxy-5hrvh" [2bc9bccf-26c6-4131-84e5-abfc1a3fed6f] Running
	I1124 08:30:18.628930   11068 system_pods.go:89] "kube-scheduler-addons-962100" [f5815bf8-d143-424e-b52a-60b2b5d4d2dd] Running
	I1124 08:30:18.628939   11068 system_pods.go:89] "metrics-server-85b7d694d7-mb5jb" [1c39e643-c348-4509-8ded-c2eefb3adf24] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:18.628950   11068 system_pods.go:89] "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:18.628960   11068 system_pods.go:89] "registry-6b586f9694-jtnn9" [e66c4dd7-d6ec-4af1-ab69-00b8319c5ac1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:18.628968   11068 system_pods.go:89] "registry-creds-764b6fb674-q7n9p" [b414335f-6ab1-4647-b55f-282ed73c74ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:18.628977   11068 system_pods.go:89] "registry-proxy-p4gxl" [84b56c07-3055-43ed-86be-24b3fa2dbd82] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:18.628985   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fbw8" [d050ec7c-b04d-4608-8ac9-5634f110fd45] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.628993   11068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lls6s" [b1e23bfe-8813-4aa6-be2e-2a0c9c64e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:18.629005   11068 system_pods.go:89] "storage-provisioner" [632037fc-ac8d-4e90-a57a-dfb70a160ff6] Running
	I1124 08:30:18.629016   11068 system_pods.go:126] duration metric: took 954.218181ms to wait for k8s-apps to be running ...
	I1124 08:30:18.629035   11068 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 08:30:18.629088   11068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:30:18.645214   11068 system_svc.go:56] duration metric: took 16.170582ms WaitForService to wait for kubelet
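The kubelet check relies purely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only while the unit is active. Stripped of the SSH transport shown in the log, it reduces to:

    package main

    import "os/exec"

    // kubeletActive reports whether the kubelet unit is active, judged solely
    // by the systemctl exit code.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }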
	I1124 08:30:18.645246   11068 kubeadm.go:587] duration metric: took 42.614016345s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:30:18.645267   11068 node_conditions.go:102] verifying NodePressure condition ...
	I1124 08:30:18.648194   11068 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 08:30:18.648226   11068 node_conditions.go:123] node cpu capacity is 8
	I1124 08:30:18.648245   11068 node_conditions.go:105] duration metric: took 2.971188ms to run NodePressure ...
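The NodePressure verification reads capacity straight off the Node status, which is where the 304681132Ki ephemeral-storage and 8-CPU figures above come from. A sketch under the same clientset assumptions as earlier:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printCapacity mirrors the node_conditions.go output: ephemeral storage
    // and CPU capacity taken from the node's reported status.
    func printCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    	fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu().String())
    	return nil
    }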
	I1124 08:30:18.648260   11068 start.go:242] waiting for startup goroutines ...
	I1124 08:30:18.802071   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:19.023187   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:19.023194   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:19.031033   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:19.302999   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:19.522943   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:19.523134   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:19.624259   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:19.801561   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:20.022480   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:20.022530   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:20.031743   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:20.302547   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:20.523014   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:20.523047   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:20.532195   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:20.802102   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:21.021812   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:21.021884   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:21.031208   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:21.302710   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:21.523537   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:21.523607   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:21.533375   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:21.804974   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:22.023287   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:22.023322   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:22.031803   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:22.302851   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:22.522954   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:22.523018   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:22.531786   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:22.802429   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:23.022450   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.022574   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:23.032180   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:23.302958   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:23.523387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.525384   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:23.533987   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:23.802605   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:24.023366   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.023582   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.033096   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:24.302739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:24.522910   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.523137   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.532003   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:24.824629   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:25.022857   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.022975   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.032546   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.302230   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:25.522153   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.522542   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.531829   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.804491   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.023253   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.023475   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.031996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.302006   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.523913   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.524198   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.532001   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.802881   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.023024   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.023238   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.031703   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.302712   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.523222   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.523366   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.533237   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.802588   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.023324   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.023476   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.032477   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.302194   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.523052   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.523058   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.531440   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.802605   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.022876   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.022978   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.031487   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.302682   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.523292   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.523352   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.531866   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.802765   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.103171   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.103252   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.103478   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.301545   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.522776   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.522826   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.532139   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.802191   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.023516   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.023727   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.033062   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.301729   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.522943   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.523122   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.531855   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.803000   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.023612   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.023780   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.031984   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.301601   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.522307   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.522344   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.531495   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.802240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.022404   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.022438   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.031695   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.302327   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.521937   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.521992   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.531033   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.801855   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.022442   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.022513   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.031708   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.302170   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.522240   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.522318   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.531851   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.802426   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.022541   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.022548   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.031570   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.303355   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.522768   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.522864   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.532012   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.801866   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.023065   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.023280   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.031387   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.302018   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.522900   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.523038   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.530839   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.801374   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.021931   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.022100   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.031408   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.302482   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.522996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.523047   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.532130   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.802116   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.023559   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.023711   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.032242   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.301741   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.522813   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.522848   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.531657   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.803180   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.022517   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.022520   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.031368   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.302293   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.522882   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.522973   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.531121   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.802075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.022005   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.022139   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.031502   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.301996   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.523522   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.523670   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.531837   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.803146   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.022050   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.022181   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.031457   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.301789   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.523051   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.523098   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.533931   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.802039   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.022017   11068 kapi.go:107] duration metric: took 1m4.502855375s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 08:30:42.022041   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.031075   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.302430   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.522225   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.532445   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.802672   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.022788   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.032503   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.302378   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.521815   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.533739   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.801534   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.023204   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.031881   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.301620   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.523181   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.532514   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.802217   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.022081   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.031450   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.302735   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.522833   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.531982   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.801634   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:46.022733   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:46.032314   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:46.302091   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:46.521910   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:46.531153   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:46.801695   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.023239   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.031986   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.301703   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.522502   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.531740   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.802524   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.026392   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.031610   11068 kapi.go:107] duration metric: took 1m10.002939716s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 08:30:48.302167   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.521863   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.801306   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.118788   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.453774   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.554906   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.801574   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.022918   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.301683   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.524064   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.801136   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.022423   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.302414   11068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.524137   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.803719   11068 kapi.go:107] duration metric: took 1m7.504805994s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 08:30:51.805149   11068 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-962100 cluster.
	I1124 08:30:51.806448   11068 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 08:30:51.807371   11068 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 08:30:52.023842   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:52.523393   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.022458   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.522608   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.023224   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.522730   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.022711   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.522641   11068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:56.022948   11068 kapi.go:107] duration metric: took 1m18.503830292s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 08:30:56.024372   11068 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, registry-creds, ingress-dns, nvidia-device-plugin, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1124 08:30:56.025546   11068 addons.go:530] duration metric: took 1m19.994291028s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin registry-creds ingress-dns nvidia-device-plugin inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1124 08:30:56.025583   11068 start.go:247] waiting for cluster config update ...
	I1124 08:30:56.025600   11068 start.go:256] writing updated cluster config ...
	I1124 08:30:56.025839   11068 ssh_runner.go:195] Run: rm -f paused
	I1124 08:30:56.029863   11068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:30:56.032611   11068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hvw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.036212   11068 pod_ready.go:94] pod "coredns-66bc5c9577-hvw7n" is "Ready"
	I1124 08:30:56.036230   11068 pod_ready.go:86] duration metric: took 3.599935ms for pod "coredns-66bc5c9577-hvw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.037947   11068 pod_ready.go:83] waiting for pod "etcd-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.041260   11068 pod_ready.go:94] pod "etcd-addons-962100" is "Ready"
	I1124 08:30:56.041275   11068 pod_ready.go:86] duration metric: took 3.313071ms for pod "etcd-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.042746   11068 pod_ready.go:83] waiting for pod "kube-apiserver-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.045925   11068 pod_ready.go:94] pod "kube-apiserver-addons-962100" is "Ready"
	I1124 08:30:56.045942   11068 pod_ready.go:86] duration metric: took 3.179384ms for pod "kube-apiserver-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.047526   11068 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.434170   11068 pod_ready.go:94] pod "kube-controller-manager-addons-962100" is "Ready"
	I1124 08:30:56.434195   11068 pod_ready.go:86] duration metric: took 386.652582ms for pod "kube-controller-manager-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:56.643897   11068 pod_ready.go:83] waiting for pod "kube-proxy-5hrvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.033541   11068 pod_ready.go:94] pod "kube-proxy-5hrvh" is "Ready"
	I1124 08:30:57.033565   11068 pod_ready.go:86] duration metric: took 389.639069ms for pod "kube-proxy-5hrvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.233479   11068 pod_ready.go:83] waiting for pod "kube-scheduler-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.633683   11068 pod_ready.go:94] pod "kube-scheduler-addons-962100" is "Ready"
	I1124 08:30:57.633708   11068 pod_ready.go:86] duration metric: took 400.206459ms for pod "kube-scheduler-addons-962100" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:30:57.633718   11068 pod_ready.go:40] duration metric: took 1.603834576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:30:57.678387   11068 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 08:30:57.680500   11068 out.go:179] * Done! kubectl is now configured to use "addons-962100" cluster and "default" namespace by default
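	The kapi.go:96/kapi.go:107 lines above are produced by a label-selector polling loop: list the pods matching a label, log the current state, retry until all are Running, then record the duration. A minimal sketch of that pattern, assuming client-go and a kubeconfig at the default path (an illustration, not minikube's actual kapi.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector in ns is Running,
	// printing one line per attempt much like the kapi.go output above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
					return false, nil // empty lists and transient errors are retried, not fatal
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		start := time.Now()
		if err := waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
	}

	As the gcp-auth messages above note, a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key in its metadata; the value the webhook expects (commonly "true") is an assumption here rather than something this log confirms.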
	
	
	==> CRI-O <==
	Nov 24 08:30:55 addons-962100 crio[772]: time="2025-11-24T08:30:55.265261776Z" level=info msg="Starting container: dcf5971be7507637ec41b6d37b4427c9c63276b3c37b9a12e88d24e9513380b0" id=eb99af51-168c-41d5-bf98-784ef45bc7f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 08:30:55 addons-962100 crio[772]: time="2025-11-24T08:30:55.26696387Z" level=info msg="Started container" PID=5800 containerID=dcf5971be7507637ec41b6d37b4427c9c63276b3c37b9a12e88d24e9513380b0 description=ingress-nginx/ingress-nginx-controller-6c8bf45fb-6jbv4/controller id=eb99af51-168c-41d5-bf98-784ef45bc7f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b7cbae64d7a87a3bdd25fb0b8b5e63415115f5c7a277a6f52613a2c5b072bb3
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.51873358Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6c72d365-dc08-4081-9e31-de5bd2858d23 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.518799872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.524624646Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39e10a4fdfab7f36bc76647513ad0c92efb9a42b7349216dd3ec150c70931c6b UID:1c466647-19c8-4bd7-89da-2219f06ffc9a NetNS:/var/run/netns/343043cb-1e56-49aa-800a-5decb4bf5572 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006ba4e0}] Aliases:map[]}"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.524655132Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.534744004Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39e10a4fdfab7f36bc76647513ad0c92efb9a42b7349216dd3ec150c70931c6b UID:1c466647-19c8-4bd7-89da-2219f06ffc9a NetNS:/var/run/netns/343043cb-1e56-49aa-800a-5decb4bf5572 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006ba4e0}] Aliases:map[]}"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.53485908Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.535706004Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.536492411Z" level=info msg="Ran pod sandbox 39e10a4fdfab7f36bc76647513ad0c92efb9a42b7349216dd3ec150c70931c6b with infra container: default/busybox/POD" id=6c72d365-dc08-4081-9e31-de5bd2858d23 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.537718272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f9660ac-cf2e-4159-bbd0-739b3168f9e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.537831594Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3f9660ac-cf2e-4159-bbd0-739b3168f9e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.53786447Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3f9660ac-cf2e-4159-bbd0-739b3168f9e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.538482282Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=034c29d0-5057-47f8-958e-2ba5fb4d2f65 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:30:58 addons-962100 crio[772]: time="2025-11-24T08:30:58.539752931Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.715591223Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=034c29d0-5057-47f8-958e-2ba5fb4d2f65 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.716205157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78033d6f-1ed0-4e73-8159-3ee719e2f935 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.717709466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2c6b48e9-ae9f-48e7-b1a9-1c63b2dac8a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.721018902Z" level=info msg="Creating container: default/busybox/busybox" id=96faa343-a398-4df5-9472-4b5b43ec92ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.721142524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.727244612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.727670432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.762978402Z" level=info msg="Created container 4194471ba3f853bb8dfbfe46ca83712117966e87b0b63f89b578ccd98bcbd805: default/busybox/busybox" id=96faa343-a398-4df5-9472-4b5b43ec92ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.76358935Z" level=info msg="Starting container: 4194471ba3f853bb8dfbfe46ca83712117966e87b0b63f89b578ccd98bcbd805" id=afb26f40-463c-4e7f-ad03-417537491424 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 08:30:59 addons-962100 crio[772]: time="2025-11-24T08:30:59.765404019Z" level=info msg="Started container" PID=6192 containerID=4194471ba3f853bb8dfbfe46ca83712117966e87b0b63f89b578ccd98bcbd805 description=default/busybox/busybox id=afb26f40-463c-4e7f-ad03-417537491424 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39e10a4fdfab7f36bc76647513ad0c92efb9a42b7349216dd3ec150c70931c6b
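	The RunPodSandbox / ImageStatus / PullImage / CreateContainer / StartContainer sequence above is the standard CRI flow the kubelet drives over CRI-O's gRPC socket. A minimal sketch of the image-side half, assuming k8s.io/cri-api, a recent grpc-go, and CRI-O's default socket path (illustrative only, not kubelet code):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O serves the CRI gRPC API on a local unix socket.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := runtimeapi.NewImageServiceClient(conn)
		ctx := context.Background()
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		// ImageStatus returns a nil Image when the image is absent locally,
		// which is the "Image ... not found" line in the log above.
		status, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		if status.Image == nil {
			// PullImage corresponds to the "Pulling image" / "Pulled image"
			// lines; CreateContainer and StartContainer on the RuntimeService
			// then produce the "Created container" / "Started container" lines.
			if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
				panic(err)
			}
		}
		fmt.Println("image present")
	}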
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	4194471ba3f85       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   39e10a4fdfab7       busybox                                    default
	dcf5971be7507       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             12 seconds ago       Running             controller                               0                   7b7cbae64d7a8       ingress-nginx-controller-6c8bf45fb-6jbv4   ingress-nginx
	8ef7618202431       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   fd5a17a7b82b7       gcp-auth-78565c9fb4-s884b                  gcp-auth
	0b6bed5093f7a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          20 seconds ago       Running             csi-snapshotter                          0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	4b4e46a4d1356       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          21 seconds ago       Running             csi-provisioner                          0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	ab4c77cb74d98       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            22 seconds ago       Running             liveness-probe                           0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	9c0b3f7c96a76       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           23 seconds ago       Running             hostpath                                 0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	f12215527d7fc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            23 seconds ago       Running             gadget                                   0                   a20861c13e13b       gadget-k7jjg                               gadget
	fe44815b0c642       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                26 seconds ago       Running             node-driver-registrar                    0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	7ebc78750ca51       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              26 seconds ago       Running             registry-proxy                           0                   03d4e5db7133f       registry-proxy-p4gxl                       kube-system
	b825e1d2b115c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   708740b70617f       csi-hostpathplugin-lnrv4                   kube-system
	8bbd9f289c92c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     29 seconds ago       Running             nvidia-device-plugin-ctr                 0                   28addcd28d33c       nvidia-device-plugin-daemonset-mf4wk       kube-system
	b4ac1bd9012ee       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   32 seconds ago       Exited              patch                                    0                   db7458e7b6145       gcp-auth-certs-patch-bqtxl                 gcp-auth
	5c530c99eae1b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     32 seconds ago       Running             amd-gpu-device-plugin                    0                   09dbd720216cc       amd-gpu-device-plugin-cs5ww                kube-system
	456f839390728       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   32 seconds ago       Exited              create                                   0                   4096a171fa482       gcp-auth-certs-create-bfb66                gcp-auth
	c9e2bec536040       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              patch                                    0                   0d68186fcd987       ingress-nginx-admission-patch-kcqn2        ingress-nginx
	f975d80052ff5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              create                                   0                   9344fd2fc0453       ingress-nginx-admission-create-xv7ps       ingress-nginx
	648f5560e5b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   68cc75a04bcb0       snapshot-controller-7d9fbc56b8-2fbw8       kube-system
	267fb926260fe       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              35 seconds ago       Running             yakd                                     0                   25c2d162621a1       yakd-dashboard-5ff678cb9-xvtxb             yakd-dashboard
	c0296faa52b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      37 seconds ago       Running             volume-snapshot-controller               0                   d79856deda5dd       snapshot-controller-7d9fbc56b8-lls6s       kube-system
	6d6152975c279       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             39 seconds ago       Running             csi-attacher                             0                   ce4e0723716c1       csi-hostpath-attacher-0                    kube-system
	9808d8d748e86       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              39 seconds ago       Running             csi-resizer                              0                   4b862b0fb29d3       csi-hostpath-resizer-0                     kube-system
	e5b3bc6f75f2b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               40 seconds ago       Running             minikube-ingress-dns                     0                   debda6c7d872f       kube-ingress-dns-minikube                  kube-system
	d267683000cc4       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               45 seconds ago       Running             cloud-spanner-emulator                   0                   3ea993c1fb82a       cloud-spanner-emulator-5bdddb765-qhv6q     default
	ed9045036451f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             47 seconds ago       Running             local-path-provisioner                   0                   61363cd3ab4cb       local-path-provisioner-648f6765c9-5wm5f    local-path-storage
	f40ff74c2839e       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           48 seconds ago       Running             registry                                 0                   22b44cf6480c3       registry-6b586f9694-jtnn9                  kube-system
	f6f66d85b0739       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        49 seconds ago       Running             metrics-server                           0                   ce6b67c4dbe69       metrics-server-85b7d694d7-mb5jb            kube-system
	57a5df478ca20       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             50 seconds ago       Running             coredns                                  0                   b42534c1a83bc       coredns-66bc5c9577-hvw7n                   kube-system
	dedc546cfa8d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             50 seconds ago       Running             storage-provisioner                      0                   29204f49b78d1       storage-provisioner                        kube-system
	c2361bae81167       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   01b66c624db6d       kindnet-kzhgg                              kube-system
	c3e272d2f60e0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   bed6cf1ba640b       kube-proxy-5hrvh                           kube-system
	4d0e2042b8500       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   f06bdd195677a       kube-apiserver-addons-962100               kube-system
	a0cbba27959f4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   e4b1077fa7da2       etcd-addons-962100                         kube-system
	b758fa8074d44       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   6976303d62a23       kube-controller-manager-addons-962100      kube-system
	5d9b85005d8ee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   643e424bd1b17       kube-scheduler-addons-962100               kube-system
	
	
	==> coredns [57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50] <==
	[INFO] 10.244.0.18:43280 - 10677 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013984s
	[INFO] 10.244.0.18:58081 - 57654 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106125s
	[INFO] 10.244.0.18:58081 - 57392 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082255s
	[INFO] 10.244.0.18:54602 - 11113 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.00007132s
	[INFO] 10.244.0.18:54602 - 10755 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000088788s
	[INFO] 10.244.0.18:59356 - 47885 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000042911s
	[INFO] 10.244.0.18:59356 - 47592 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000087476s
	[INFO] 10.244.0.18:44896 - 13402 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000044948s
	[INFO] 10.244.0.18:44896 - 13154 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000057293s
	[INFO] 10.244.0.18:51569 - 15195 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088552s
	[INFO] 10.244.0.18:51569 - 15045 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103499s
	[INFO] 10.244.0.22:60273 - 28702 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188685s
	[INFO] 10.244.0.22:38272 - 51414 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284179s
	[INFO] 10.244.0.22:59768 - 39751 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139226s
	[INFO] 10.244.0.22:43381 - 33915 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000182367s
	[INFO] 10.244.0.22:54589 - 41255 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014615s
	[INFO] 10.244.0.22:52656 - 44124 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184368s
	[INFO] 10.244.0.22:50663 - 59964 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006314683s
	[INFO] 10.244.0.22:49770 - 56680 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006807088s
	[INFO] 10.244.0.22:40600 - 32872 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006777077s
	[INFO] 10.244.0.22:33824 - 9750 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006954044s
	[INFO] 10.244.0.22:60070 - 34083 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005269102s
	[INFO] 10.244.0.22:33260 - 51789 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005389s
	[INFO] 10.244.0.22:34673 - 21156 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001263523s
	[INFO] 10.244.0.22:60528 - 22691 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002085663s
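	The NXDOMAIN cascade above is resolver search-path expansion at work: with the kubelet default ndots:5, a relative name with fewer than five dots is tried with each search domain appended before being tried as an absolute name, so a lookup of registry.kube-system.svc.cluster.local (four dots) walks the whole list before the bare name finally returns NOERROR. A minimal sketch of that expansion order, using only the search domains visible in the queries above (the GCE-internal entries are inherited from the node's resolv.conf):

	package main

	import (
		"fmt"
		"strings"
	)

	// expand lists the FQDNs a stub resolver would try, in order.
	func expand(name string, ndots int, search []string) []string {
		var tries []string
		if strings.Count(name, ".") < ndots {
			for _, dom := range search {
				tries = append(tries, name+"."+dom+".")
			}
		}
		return append(tries, name+".") // the absolute name, tried last
	}

	func main() {
		search := []string{
			"svc.cluster.local",
			"cluster.local",
			"us-east4-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		// Each printed name matches one A/AAAA query pair in the coredns log,
		// with only the final absolute lookup answered NOERROR.
		for _, fqdn := range expand("registry.kube-system.svc.cluster.local", 5, search) {
			fmt.Println(fqdn)
		}
	}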
	
	
	==> describe nodes <==
	Name:               addons-962100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-962100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=addons-962100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_29_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-962100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-962100"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:29:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-962100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:31:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:31:02 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:31:02 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:31:02 +0000   Mon, 24 Nov 2025 08:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:31:02 +0000   Mon, 24 Nov 2025 08:30:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-962100
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                8fe3bd7f-1ad1-4365-8ebc-47aaf9cc78fb
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-qhv6q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gadget                      gadget-k7jjg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-78565c9fb4-s884b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6jbv4    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         91s
	  kube-system                 amd-gpu-device-plugin-cs5ww                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-66bc5c9577-hvw7n                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpathplugin-lnrv4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 etcd-addons-962100                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-kzhgg                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-962100                250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-addons-962100       200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-5hrvh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-962100                100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-85b7d694d7-mb5jb             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 nvidia-device-plugin-daemonset-mf4wk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 registry-6b586f9694-jtnn9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-creds-764b6fb674-q7n9p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-proxy-p4gxl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 snapshot-controller-7d9fbc56b8-2fbw8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-lls6s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  local-path-storage          local-path-provisioner-648f6765c9-5wm5f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xvtxb              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 91s   kube-proxy       
	  Normal  Starting                 98s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s   kubelet          Node addons-962100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s   kubelet          Node addons-962100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s   kubelet          Node addons-962100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s   node-controller  Node addons-962100 event: Registered Node addons-962100 in Controller
	  Normal  NodeReady                51s   kubelet          Node addons-962100 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001892] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.402703] i8042: Warning: Keylock active
	[  +0.013055] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491250] block sda: the capability attribute has been deprecated.
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2] <==
	{"level":"warn","ts":"2025-11-24T08:29:27.420090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.426952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.433421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.442692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.448749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.455478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.461229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.467175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.473687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.481460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.487550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.507604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.514178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.520247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:27.562175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:38.562308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:29:38.570632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.955541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.962503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.974090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:30:04.980382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:30:24.673300Z","caller":"traceutil/trace.go:172","msg":"trace[504337096] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"133.138113ms","start":"2025-11-24T08:30:24.540146Z","end":"2025-11-24T08:30:24.673285Z","steps":["trace[504337096] 'process raft request'  (duration: 133.013116ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:30:49.117674Z","caller":"traceutil/trace.go:172","msg":"trace[1548160243] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"114.314346ms","start":"2025-11-24T08:30:49.003343Z","end":"2025-11-24T08:30:49.117658Z","steps":["trace[1548160243] 'process raft request'  (duration: 114.159533ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:30:49.452412Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.038502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:30:49.452498Z","caller":"traceutil/trace.go:172","msg":"trace[197173428] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.168793ms","start":"2025-11-24T08:30:49.301314Z","end":"2025-11-24T08:30:49.452483Z","steps":["trace[197173428] 'range keys from in-memory index tree'  (duration: 150.862485ms)"],"step_count":1}
	
	
	==> gcp-auth [8ef7618202431d505d8b8ddaf32376364e37c9c96ea31b77ef7b58e16f648587] <==
	2025/11/24 08:30:51 GCP Auth Webhook started!
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	2025/11/24 08:30:58 Ready to marshal response ...
	2025/11/24 08:30:58 Ready to write response ...
	
	
	==> kernel <==
	 08:31:08 up 13 min,  0 user,  load average: 1.49, 0.74, 0.29
	Linux addons-962100 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9] <==
	I1124 08:29:36.613381       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T08:29:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 08:29:36.827298       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 08:29:36.830441       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 08:29:36.830472       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 08:29:36.830595       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 08:30:06.828638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 08:30:06.828638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 08:30:06.828738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 08:30:06.828913       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 08:30:08.331169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 08:30:08.331196       1 metrics.go:72] Registering metrics
	I1124 08:30:08.331246       1 controller.go:711] "Syncing nftables rules"
	I1124 08:30:16.833186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:30:16.833221       1 main.go:301] handling current node
	I1124 08:30:26.827982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:30:26.828020       1 main.go:301] handling current node
	I1124 08:30:36.828608       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:30:36.828641       1 main.go:301] handling current node
	I1124 08:30:46.827599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:30:46.827652       1 main.go:301] handling current node
	I1124 08:30:56.828398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:30:56.828436       1 main.go:301] handling current node
	I1124 08:31:06.827565       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:31:06.827595       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 08:30:20.550059       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:20.551573       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:20.557854       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:20.579216       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.214.124:443: connect: connection refused" logger="UnhandledError"
	W1124 08:30:21.550303       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:21.550673       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1124 08:30:21.550767       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 08:30:21.550614       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:21.550962       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1124 08:30:21.552244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 08:30:25.630835       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 08:30:25.630892       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 08:30:25.630930       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.214.124:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1124 08:30:25.641113       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 08:31:06.355192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56656: use of closed network connection
	E1124 08:31:06.498593       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56682: use of closed network connection
	
	
	==> kube-controller-manager [b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a] <==
	I1124 08:29:34.937682       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 08:29:34.937757       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 08:29:34.937815       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 08:29:34.937849       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 08:29:34.938144       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 08:29:34.938147       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 08:29:34.938161       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 08:29:34.938230       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 08:29:34.938277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 08:29:34.939370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 08:29:34.939469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 08:29:34.939487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 08:29:34.940670       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 08:29:34.942841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:29:34.946094       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 08:29:34.946113       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:29:34.956784       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 08:30:04.950172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 08:30:04.950282       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 08:30:04.950322       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 08:30:04.964880       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 08:30:04.968816       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 08:30:05.050732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:30:05.069249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 08:30:19.893479       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829] <==
	I1124 08:29:36.375635       1 server_linux.go:53] "Using iptables proxy"
	I1124 08:29:36.479434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 08:29:36.580514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 08:29:36.582789       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:29:36.584803       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:29:36.901185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:29:36.901301       1 server_linux.go:132] "Using iptables Proxier"
	I1124 08:29:37.002081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:29:37.030383       1 server.go:527] "Version info" version="v1.34.2"
	I1124 08:29:37.033169       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:29:37.122963       1 config.go:200] "Starting service config controller"
	I1124 08:29:37.122988       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:29:37.123043       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:29:37.123048       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:29:37.123062       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:29:37.123067       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:29:37.160348       1 config.go:309] "Starting node config controller"
	I1124 08:29:37.160432       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:29:37.160444       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:29:37.223405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:29:37.223510       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:29:37.223544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609] <==
	E1124 08:29:27.948525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:29:27.948607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:29:27.948601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:29:27.948786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 08:29:27.948809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:29:27.948850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:29:27.948872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:29:27.948897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 08:29:27.949004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:29:27.949018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:29:27.949066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:29:27.949072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:29:27.949109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:29:27.949207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 08:29:28.797253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:29:28.886733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:29:28.912829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:29:28.971618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 08:29:28.989591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:29:29.025616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:29:29.091487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:29:29.100577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 08:29:29.123483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:29:29.158858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 08:29:31.446280       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 08:30:37 addons-962100 kubelet[1276]: I1124 08:30:37.604546    1276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9344fd2fc045331fd5c53ff9c58c897c1a2f9825dfbaaeac87b633f3b4ecbca4"
	Nov 24 08:30:37 addons-962100 kubelet[1276]: I1124 08:30:37.604923    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cs5ww" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:30:37 addons-962100 kubelet[1276]: I1124 08:30:37.907310    1276 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjxs\" (UniqueName: \"kubernetes.io/projected/b910eb79-1fd9-4c63-b581-b3405bf65e54-kube-api-access-rzjxs\") pod \"b910eb79-1fd9-4c63-b581-b3405bf65e54\" (UID: \"b910eb79-1fd9-4c63-b581-b3405bf65e54\") "
	Nov 24 08:30:37 addons-962100 kubelet[1276]: I1124 08:30:37.909947    1276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b910eb79-1fd9-4c63-b581-b3405bf65e54-kube-api-access-rzjxs" (OuterVolumeSpecName: "kube-api-access-rzjxs") pod "b910eb79-1fd9-4c63-b581-b3405bf65e54" (UID: "b910eb79-1fd9-4c63-b581-b3405bf65e54"). InnerVolumeSpecName "kube-api-access-rzjxs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 08:30:38 addons-962100 kubelet[1276]: I1124 08:30:38.008387    1276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzjxs\" (UniqueName: \"kubernetes.io/projected/b910eb79-1fd9-4c63-b581-b3405bf65e54-kube-api-access-rzjxs\") on node \"addons-962100\" DevicePath \"\""
	Nov 24 08:30:38 addons-962100 kubelet[1276]: I1124 08:30:38.609098    1276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db7458e7b6145c3f16dd9555a33f440021dd64927960e022921486241fc9d78d"
	Nov 24 08:30:39 addons-962100 kubelet[1276]: I1124 08:30:39.614158    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mf4wk" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:30:40 addons-962100 kubelet[1276]: I1124 08:30:40.618911    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mf4wk" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:30:41 addons-962100 kubelet[1276]: I1124 08:30:41.622817    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-p4gxl" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:30:41 addons-962100 kubelet[1276]: I1124 08:30:41.631454    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-mf4wk" podStartSLOduration=3.48794154 podStartE2EDuration="24.6314352s" podCreationTimestamp="2025-11-24 08:30:17 +0000 UTC" firstStartedPulling="2025-11-24 08:30:17.7630861 +0000 UTC m=+47.454278256" lastFinishedPulling="2025-11-24 08:30:38.906579768 +0000 UTC m=+68.597771916" observedRunningTime="2025-11-24 08:30:39.626276697 +0000 UTC m=+69.317468858" watchObservedRunningTime="2025-11-24 08:30:41.6314352 +0000 UTC m=+71.322627359"
	Nov 24 08:30:41 addons-962100 kubelet[1276]: I1124 08:30:41.632256    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-p4gxl" podStartSLOduration=1.043513163 podStartE2EDuration="24.632247437s" podCreationTimestamp="2025-11-24 08:30:17 +0000 UTC" firstStartedPulling="2025-11-24 08:30:17.848479367 +0000 UTC m=+47.539671516" lastFinishedPulling="2025-11-24 08:30:41.437213639 +0000 UTC m=+71.128405790" observedRunningTime="2025-11-24 08:30:41.631686589 +0000 UTC m=+71.322878744" watchObservedRunningTime="2025-11-24 08:30:41.632247437 +0000 UTC m=+71.323439594"
	Nov 24 08:30:42 addons-962100 kubelet[1276]: I1124 08:30:42.632010    1276 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-p4gxl" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:30:44 addons-962100 kubelet[1276]: I1124 08:30:44.654755    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-k7jjg" podStartSLOduration=65.521346773 podStartE2EDuration="1m7.654734433s" podCreationTimestamp="2025-11-24 08:29:37 +0000 UTC" firstStartedPulling="2025-11-24 08:30:42.013846142 +0000 UTC m=+71.705038288" lastFinishedPulling="2025-11-24 08:30:44.147233811 +0000 UTC m=+73.838425948" observedRunningTime="2025-11-24 08:30:44.654039619 +0000 UTC m=+74.345231801" watchObservedRunningTime="2025-11-24 08:30:44.654734433 +0000 UTC m=+74.345926591"
	Nov 24 08:30:45 addons-962100 kubelet[1276]: I1124 08:30:45.437177    1276 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 24 08:30:45 addons-962100 kubelet[1276]: I1124 08:30:45.437226    1276 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 24 08:30:47 addons-962100 kubelet[1276]: I1124 08:30:47.675562    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-lnrv4" podStartSLOduration=1.3154760140000001 podStartE2EDuration="30.675540405s" podCreationTimestamp="2025-11-24 08:30:17 +0000 UTC" firstStartedPulling="2025-11-24 08:30:17.763087376 +0000 UTC m=+47.454279528" lastFinishedPulling="2025-11-24 08:30:47.123151777 +0000 UTC m=+76.814343919" observedRunningTime="2025-11-24 08:30:47.673487193 +0000 UTC m=+77.364679391" watchObservedRunningTime="2025-11-24 08:30:47.675540405 +0000 UTC m=+77.366732562"
	Nov 24 08:30:49 addons-962100 kubelet[1276]: E1124 08:30:49.200134    1276 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 24 08:30:49 addons-962100 kubelet[1276]: E1124 08:30:49.200241    1276 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b414335f-6ab1-4647-b55f-282ed73c74ff-gcr-creds podName:b414335f-6ab1-4647-b55f-282ed73c74ff nodeName:}" failed. No retries permitted until 2025-11-24 08:31:21.200222076 +0000 UTC m=+110.891414228 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b414335f-6ab1-4647-b55f-282ed73c74ff-gcr-creds") pod "registry-creds-764b6fb674-q7n9p" (UID: "b414335f-6ab1-4647-b55f-282ed73c74ff") : secret "registry-creds-gcr" not found
	Nov 24 08:30:51 addons-962100 kubelet[1276]: I1124 08:30:51.697513    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-s884b" podStartSLOduration=65.951186035 podStartE2EDuration="1m7.697489428s" podCreationTimestamp="2025-11-24 08:29:44 +0000 UTC" firstStartedPulling="2025-11-24 08:30:49.486192305 +0000 UTC m=+79.177384449" lastFinishedPulling="2025-11-24 08:30:51.232495684 +0000 UTC m=+80.923687842" observedRunningTime="2025-11-24 08:30:51.693393731 +0000 UTC m=+81.384585885" watchObservedRunningTime="2025-11-24 08:30:51.697489428 +0000 UTC m=+81.388681588"
	Nov 24 08:30:55 addons-962100 kubelet[1276]: I1124 08:30:55.706848    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-6jbv4" podStartSLOduration=72.98073098 podStartE2EDuration="1m18.70682552s" podCreationTimestamp="2025-11-24 08:29:37 +0000 UTC" firstStartedPulling="2025-11-24 08:30:49.491134047 +0000 UTC m=+79.182326198" lastFinishedPulling="2025-11-24 08:30:55.2172286 +0000 UTC m=+84.908420738" observedRunningTime="2025-11-24 08:30:55.705511866 +0000 UTC m=+85.396704036" watchObservedRunningTime="2025-11-24 08:30:55.70682552 +0000 UTC m=+85.398017679"
	Nov 24 08:30:58 addons-962100 kubelet[1276]: I1124 08:30:58.268679    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjb8\" (UniqueName: \"kubernetes.io/projected/1c466647-19c8-4bd7-89da-2219f06ffc9a-kube-api-access-qdjb8\") pod \"busybox\" (UID: \"1c466647-19c8-4bd7-89da-2219f06ffc9a\") " pod="default/busybox"
	Nov 24 08:30:58 addons-962100 kubelet[1276]: I1124 08:30:58.268748    1276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1c466647-19c8-4bd7-89da-2219f06ffc9a-gcp-creds\") pod \"busybox\" (UID: \"1c466647-19c8-4bd7-89da-2219f06ffc9a\") " pod="default/busybox"
	Nov 24 08:31:00 addons-962100 kubelet[1276]: I1124 08:31:00.728322    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.549272911 podStartE2EDuration="2.728305031s" podCreationTimestamp="2025-11-24 08:30:58 +0000 UTC" firstStartedPulling="2025-11-24 08:30:58.538122761 +0000 UTC m=+88.229314898" lastFinishedPulling="2025-11-24 08:30:59.71715488 +0000 UTC m=+89.408347018" observedRunningTime="2025-11-24 08:31:00.727033984 +0000 UTC m=+90.418226142" watchObservedRunningTime="2025-11-24 08:31:00.728305031 +0000 UTC m=+90.419497188"
	Nov 24 08:31:08 addons-962100 kubelet[1276]: I1124 08:31:08.389444    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507a936b-4621-4edc-84da-76f39047b604" path="/var/lib/kubelet/pods/507a936b-4621-4edc-84da-76f39047b604/volumes"
	Nov 24 08:31:08 addons-962100 kubelet[1276]: I1124 08:31:08.389822    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b910eb79-1fd9-4c63-b581-b3405bf65e54" path="/var/lib/kubelet/pods/b910eb79-1fd9-4c63-b581-b3405bf65e54/volumes"
	
	
	==> storage-provisioner [dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49] <==
	W1124 08:30:43.952000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:45.955548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:45.959001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:47.961989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:47.965507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:49.968194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:49.971970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:51.976251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:51.981679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:53.985261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:53.989301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:55.991868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:55.995573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:57.999288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:30:58.003028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:00.005654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:00.010947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:02.013572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:02.017068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:04.019647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:04.023412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:06.026520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:06.029948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:08.033027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:31:08.036240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
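For context, the state captured in the log bundle above can be re-queried against the same profile. A minimal sketch, assuming the addons-962100 profile and kubectl context from this run are still live:

	# re-dump the same log bundle
	out/minikube-linux-amd64 -p addons-962100 logs
	# re-check the node conditions and pod placement from the describe-nodes section
	kubectl --context addons-962100 describe node addons-962100
	# check whether the metrics.k8s.io APIService recovered after the 503s in the kube-apiserver log
	kubectl --context addons-962100 get apiservice v1beta1.metrics.k8s.io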
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-962100 -n addons-962100
helpers_test.go:269: (dbg) Run:  kubectl --context addons-962100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2 registry-creds-764b6fb674-q7n9p
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2 registry-creds-764b6fb674-q7n9p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2 registry-creds-764b6fb674-q7n9p: exit status 1 (59.549278ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xv7ps" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kcqn2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-q7n9p" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-962100 describe pod ingress-nginx-admission-create-xv7ps ingress-nginx-admission-patch-kcqn2 registry-creds-764b6fb674-q7n9p: exit status 1
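The three NotFound errors above come from the post-mortem helper rather than from the cluster: the admission pods belong to the ingress-nginx namespace and registry-creds-764b6fb674-q7n9p to kube-system (it is listed in the describe-nodes pod table above), but the describe command is issued without a namespace flag, so kubectl only searches default. A namespaced lookup, sketched under the assumption that the pods still exist when it runs:

	kubectl --context addons-962100 -n kube-system describe pod registry-creds-764b6fb674-q7n9p
	kubectl --context addons-962100 -n ingress-nginx describe pod ingress-nginx-admission-create-xv7ps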
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable headlamp --alsologtostderr -v=1: exit status 11 (246.026218ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:09.105286   20132 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:09.105601   20132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:09.105613   20132 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:09.105617   20132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:09.105854   20132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:09.106207   20132 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:09.106587   20132 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:09.106603   20132 addons.go:622] checking whether the cluster is paused
	I1124 08:31:09.106684   20132 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:09.106700   20132 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:09.107035   20132 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:09.125179   20132 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:09.125227   20132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:09.144765   20132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:09.243919   20132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:09.243989   20132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:09.273474   20132 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:09.273496   20132 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:09.273501   20132 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:09.273505   20132 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:09.273509   20132 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:09.273513   20132 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:09.273516   20132 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:09.273519   20132 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:09.273521   20132 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:09.273533   20132 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:09.273536   20132 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:09.273539   20132 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:09.273542   20132 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:09.273545   20132 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:09.273548   20132 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:09.273556   20132 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:09.273561   20132 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:09.273565   20132 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:09.273568   20132 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:09.273571   20132 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:09.273574   20132 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:09.273576   20132 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:09.273579   20132 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:09.273582   20132 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:09.273587   20132 cri.go:89] found id: ""
	I1124 08:31:09.273631   20132 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:09.287299   20132 out.go:203] 
	W1124 08:31:09.288737   20132 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:09.288756   20132 out.go:285] * 
	W1124 08:31:09.291626   20132 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:09.292814   20132 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.52s)
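Note on the MK_ADDON_DISABLE_PAUSED failures: every "addons disable" in this report dies the same way. Before disabling an addon, minikube checks whether the cluster is paused by shelling out to "sudo runc list -f json", but on this CRI-O node /run/runc does not exist, so the check itself fails and the command exits 11 even though the addon pods were healthy. The likely culprit is a runtime/state-root mismatch (for example CRI-O driving crun, which keeps state under /run/crun); that path is an assumption based on crun's defaults, not something captured in this run. A minimal sketch for confirming the mismatch on the node:

	# which OCI runtime state directories actually exist on the node
	minikube -p addons-962100 ssh -- sudo ls -d /run/runc /run/crun
	# the same kube-system containers remain visible through the CRI, as the crictl calls above show
	minikube -p addons-962100 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system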

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-qhv6q" [374d22d3-21cb-48cb-ab5f-e90aca7451ba] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004128042s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (474.387632ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:26.339668   22046 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:26.339806   22046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:26.339814   22046 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:26.339821   22046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:26.340137   22046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:26.340485   22046 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:26.340950   22046 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:26.340972   22046 addons.go:622] checking whether the cluster is paused
	I1124 08:31:26.341107   22046 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:26.341146   22046 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:26.341733   22046 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:26.366796   22046 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:26.366864   22046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:26.394167   22046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:26.501508   22046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:26.501617   22046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:26.537003   22046 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:31:26.537026   22046 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:26.537033   22046 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:26.537038   22046 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:26.537043   22046 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:26.537049   22046 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:26.537054   22046 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:26.537059   22046 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:26.537064   22046 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:26.537083   22046 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:26.537091   22046 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:26.537096   22046 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:26.537104   22046 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:26.537109   22046 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:26.537116   22046 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:26.537127   22046 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:26.537135   22046 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:26.537141   22046 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:26.537145   22046 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:26.537150   22046 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:26.537157   22046 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:26.537162   22046 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:26.537166   22046 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:26.537170   22046 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:26.537175   22046 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:26.537179   22046 cri.go:89] found id: ""
	I1124 08:31:26.537226   22046 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:26.570224   22046 out.go:203] 
	W1124 08:31:26.573844   22046 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:26.573864   22046 out.go:285] * 
	W1124 08:31:26.578777   22046 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:26.662436   22046 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/LocalPath (8.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-962100 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-962100 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c972d1f3-0b0f-4790-952b-4f74d0db09dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c972d1f3-0b0f-4790-952b-4f74d0db09dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c972d1f3-0b0f-4790-952b-4f74d0db09dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003337202s
addons_test.go:967: (dbg) Run:  kubectl --context addons-962100 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 ssh "cat /opt/local-path-provisioner/pvc-ce2de511-5f70-4830-81a6-055c004c75bd_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-962100 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-962100 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.324115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:28.190176   22417 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:28.190452   22417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:28.190461   22417 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:28.190466   22417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:28.190636   22417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:28.190880   22417 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:28.191184   22417 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:28.191197   22417 addons.go:622] checking whether the cluster is paused
	I1124 08:31:28.191275   22417 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:28.191306   22417 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:28.191672   22417 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:28.211324   22417 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:28.211397   22417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:28.228860   22417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:28.329051   22417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:28.329132   22417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:28.357812   22417 cri.go:89] found id: "91d297154e6dda1e2f052e15ea1a4f8f73e3907171575a40ea567f89618d4b96"
	I1124 08:31:28.357841   22417 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:28.357845   22417 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:28.357848   22417 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:28.357851   22417 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:28.357855   22417 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:28.357857   22417 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:28.357860   22417 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:28.357863   22417 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:28.357877   22417 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:28.357883   22417 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:28.357886   22417 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:28.357888   22417 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:28.357891   22417 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:28.357894   22417 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:28.357901   22417 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:28.357906   22417 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:28.357910   22417 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:28.357913   22417 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:28.357916   22417 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:28.357919   22417 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:28.357921   22417 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:28.357924   22417 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:28.357926   22417 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:28.357929   22417 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:28.357932   22417 cri.go:89] found id: ""
	I1124 08:31:28.357981   22417 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:28.372522   22417 out.go:203] 
	W1124 08:31:28.376532   22417 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:28.376560   22417 out.go:285] * 
	W1124 08:31:28.381096   22417 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:28.382557   22417 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.13s)

TestAddons/parallel/NvidiaDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mf4wk" [f95b9be0-530d-43d3-bfc1-ea916925dd2c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003251551s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (245.64459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:12.832666   20318 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:12.832794   20318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:12.832802   20318 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:12.832806   20318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:12.832965   20318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:12.833236   20318 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:12.833563   20318 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:12.833577   20318 addons.go:622] checking whether the cluster is paused
	I1124 08:31:12.833652   20318 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:12.833667   20318 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:12.834033   20318 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:12.851573   20318 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:12.851643   20318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:12.869475   20318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:12.970764   20318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:12.970838   20318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:13.000462   20318 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:13.000483   20318 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:13.000487   20318 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:13.000491   20318 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:13.000494   20318 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:13.000497   20318 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:13.000500   20318 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:13.000503   20318 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:13.000506   20318 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:13.000511   20318 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:13.000514   20318 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:13.000524   20318 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:13.000531   20318 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:13.000536   20318 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:13.000554   20318 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:13.000572   20318 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:13.000579   20318 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:13.000583   20318 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:13.000586   20318 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:13.000588   20318 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:13.000594   20318 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:13.000596   20318 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:13.000599   20318 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:13.000601   20318 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:13.000604   20318 cri.go:89] found id: ""
	I1124 08:31:13.000649   20318 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:13.014255   20318 out.go:203] 
	W1124 08:31:13.015482   20318 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:13.015499   20318 out.go:285] * 
	W1124 08:31:13.018440   20318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:13.019751   20318 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

TestAddons/parallel/Yakd (5.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xvtxb" [f012bb4b-6952-4ced-b659-e895727e69da] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00343121s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable yakd --alsologtostderr -v=1: exit status 11 (243.179862ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:17.139172   20977 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:17.139456   20977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:17.139465   20977 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:17.139469   20977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:17.139678   20977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:17.139944   20977 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:17.140289   20977 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:17.140306   20977 addons.go:622] checking whether the cluster is paused
	I1124 08:31:17.140399   20977 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:17.140414   20977 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:17.140761   20977 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:17.158427   20977 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:17.158489   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:17.176206   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:17.275728   20977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:17.275842   20977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:17.303666   20977 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:17.303684   20977 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:17.303688   20977 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:17.303691   20977 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:17.303695   20977 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:17.303698   20977 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:17.303700   20977 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:17.303710   20977 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:17.303713   20977 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:17.303718   20977 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:17.303725   20977 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:17.303729   20977 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:17.303732   20977 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:17.303735   20977 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:17.303738   20977 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:17.303753   20977 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:17.303760   20977 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:17.303764   20977 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:17.303767   20977 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:17.303769   20977 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:17.303772   20977 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:17.303775   20977 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:17.303777   20977 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:17.303780   20977 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:17.303783   20977 cri.go:89] found id: ""
	I1124 08:31:17.303819   20977 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:17.319354   20977 out.go:203] 
	W1124 08:31:17.320491   20977 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:17.320506   20977 out.go:285] * 
	W1124 08:31:17.323402   20977 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:17.324672   20977 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

TestAddons/parallel/AmdGpuDevicePlugin (5.29s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-cs5ww" [04f59c85-61cd-40b0-8427-163315da0b5b] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003254833s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-962100 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962100 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (288.155427ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 08:31:14.369626   20525 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:31:14.370050   20525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:14.370063   20525 out.go:374] Setting ErrFile to fd 2...
	I1124 08:31:14.370068   20525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:31:14.370469   20525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:31:14.370910   20525 mustload.go:66] Loading cluster: addons-962100
	I1124 08:31:14.371471   20525 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:14.371493   20525 addons.go:622] checking whether the cluster is paused
	I1124 08:31:14.371629   20525 config.go:182] Loaded profile config "addons-962100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:31:14.371653   20525 host.go:66] Checking if "addons-962100" exists ...
	I1124 08:31:14.372188   20525 cli_runner.go:164] Run: docker container inspect addons-962100 --format={{.State.Status}}
	I1124 08:31:14.395465   20525 ssh_runner.go:195] Run: systemctl --version
	I1124 08:31:14.395511   20525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-962100
	I1124 08:31:14.415070   20525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/addons-962100/id_rsa Username:docker}
	I1124 08:31:14.522980   20525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:31:14.523049   20525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:31:14.558126   20525 cri.go:89] found id: "0b6bed5093f7a4972477625e4862b1c6d22baadd56839f9a62206675a25e1480"
	I1124 08:31:14.558163   20525 cri.go:89] found id: "4b4e46a4d1356329738b6935cf3f2cda74a6ee9f2d62de4d0edc978e525c7dc5"
	I1124 08:31:14.558169   20525 cri.go:89] found id: "ab4c77cb74d98989b90212f9a7351f2e8d7ec0360fd94fdd688c51ba235bc13c"
	I1124 08:31:14.558174   20525 cri.go:89] found id: "9c0b3f7c96a763e0f198a637e44672928c6f9e9a0681f0242f7ce72d89aedee8"
	I1124 08:31:14.558179   20525 cri.go:89] found id: "fe44815b0c642cf087a376b9ca4b0a77a1809df468aebcee1e7c1768fea94595"
	I1124 08:31:14.558184   20525 cri.go:89] found id: "7ebc78750ca5139ccc50c75dcec85e74c7546ad55f982847a2d2de1f5e70e3a4"
	I1124 08:31:14.558188   20525 cri.go:89] found id: "b825e1d2b115c025fa524b18ebe2302d2704b14b8709d11e71c863ef4ff16efa"
	I1124 08:31:14.558192   20525 cri.go:89] found id: "8bbd9f289c92c741f32073cf14fc7881325c2d4910530e3331abc146e0a1bcbb"
	I1124 08:31:14.558196   20525 cri.go:89] found id: "5c530c99eae1be5b5c94314bb792e6a29276eab2b18b64189e7a4a3f5299ffd5"
	I1124 08:31:14.558204   20525 cri.go:89] found id: "648f5560e5b98eaa710529b185d8442284e45a0dd14863f9144b54f21a6c6f9f"
	I1124 08:31:14.558217   20525 cri.go:89] found id: "c0296faa52b98ac391371a5b51e226536341a6cee75a9038676217f5db193e20"
	I1124 08:31:14.558221   20525 cri.go:89] found id: "6d6152975c279dc9c841214a514cf912f6a7cf9e1795083bec0f783804569a8f"
	I1124 08:31:14.558225   20525 cri.go:89] found id: "9808d8d748e8693d84bf9aba11f3cc09be458c92fa6176e958c76cf791cbbae9"
	I1124 08:31:14.558230   20525 cri.go:89] found id: "e5b3bc6f75f2bd152398896181f773a2ba55a628115186deab89512ed3d7f481"
	I1124 08:31:14.558235   20525 cri.go:89] found id: "f40ff74c2839e18d7d185f3242c5add8eac9f8347235210b62dbcfd135fe08ec"
	I1124 08:31:14.558242   20525 cri.go:89] found id: "f6f66d85b0739818a01ecf62a0b9d3e7f7a22f8b98fd290b69192274ce033680"
	I1124 08:31:14.558247   20525 cri.go:89] found id: "57a5df478ca2062bd7d11cc584c819c3dd12560580f7bbf32d9069029e1ffd50"
	I1124 08:31:14.558253   20525 cri.go:89] found id: "dedc546cfa8d1ebdc5378116f7d35adb988d29c5297cdea6d2bc68d4f1fa1e49"
	I1124 08:31:14.558257   20525 cri.go:89] found id: "c2361bae81167985f0ffd34478ca4d134ced72d2286b8b99f9d0ea5672069dd9"
	I1124 08:31:14.558261   20525 cri.go:89] found id: "c3e272d2f60e05001ba225022af06f8b78fe309341226272efa4ba723be2c829"
	I1124 08:31:14.558265   20525 cri.go:89] found id: "4d0e2042b8500074d01bc7866ed71ff41c6fae3a75665bef0a7cb185308d0ef6"
	I1124 08:31:14.558269   20525 cri.go:89] found id: "a0cbba27959f4eba73c4fb61b81af0733e698e8dab9ae360d2e311bd296c85f2"
	I1124 08:31:14.558277   20525 cri.go:89] found id: "b758fa8074d4497ee3df87033fb85dc0443fedb8b986525523a557292b1c369a"
	I1124 08:31:14.558282   20525 cri.go:89] found id: "5d9b85005d8ee4e39ad1614f9e0404a361a2cfe30d90657758d0f20b26c4d609"
	I1124 08:31:14.558305   20525 cri.go:89] found id: ""
	I1124 08:31:14.558385   20525 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 08:31:14.576963   20525 out.go:203] 
	W1124 08:31:14.578252   20525 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:31:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 08:31:14.578277   20525 out.go:285] * 
	W1124 08:31:14.582914   20525 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 08:31:14.584422   20525 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-962100 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.29s)

TestFunctional/parallel/ServiceCmdConnect (602.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-683533 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-683533 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qfsn9" [9c55ad1f-de3b-4818-91bb-442508e98c52] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-683533 -n functional-683533
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 08:47:19.922928127 +0000 UTC m=+1125.983652055
functional_test.go:1645: (dbg) Run:  kubectl --context functional-683533 describe po hello-node-connect-7d85dfc575-qfsn9 -n default
functional_test.go:1645: (dbg) kubectl --context functional-683533 describe po hello-node-connect-7d85dfc575-qfsn9 -n default:
Name:             hello-node-connect-7d85dfc575-qfsn9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-683533/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:37:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jwxjm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jwxjm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qfsn9 to functional-683533
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
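The five ErrImagePull events above share one root cause: the deployment references the unqualified image name kicbase/echo-server, and CRI-O's short-name mode is set to enforcing, so resolving the short name against the configured registry list is rejected as ambiguous. The usual fixes are to fully qualify the image reference, or to set short-name-mode = "permissive" in /etc/containers/registries.conf on the node. A minimal sketch of the first option, assuming the image is hosted on docker.io (the registry prefix and the :1.0 tag are assumptions, not taken from this run):

	kubectl --context functional-683533 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:1.0
	kubectl --context functional-683533 rollout status deployment/hello-node-connect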
functional_test.go:1645: (dbg) Run:  kubectl --context functional-683533 logs hello-node-connect-7d85dfc575-qfsn9 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-683533 logs hello-node-connect-7d85dfc575-qfsn9 -n default: exit status 1 (63.11259ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qfsn9" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-683533 logs hello-node-connect-7d85dfc575-qfsn9 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-683533 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-qfsn9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-683533/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:37:19 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jwxjm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-jwxjm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qfsn9 to functional-683533
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-683533 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-683533 logs -l app=hello-node-connect: exit status 1 (60.696538ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qfsn9" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-683533 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-683533 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.65.129
IPs:                      10.103.65.129
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32081/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
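Note the empty Endpoints field above: the selector app=hello-node-connect does match the pod, but only Ready pods are published as endpoints, and the single replica never left ImagePullBackOff, so the NodePort has no backend to route to. A quick sketch of how to confirm that chain, using only names already shown in this report:

	kubectl --context functional-683533 get pods -l app=hello-node-connect -o wide
	kubectl --context functional-683533 get endpoints hello-node-connect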
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-683533
helpers_test.go:243: (dbg) docker inspect functional-683533:

-- stdout --
	[
	    {
	        "Id": "2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5",
	        "Created": "2025-11-24T08:34:58.640230054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T08:34:58.68202862Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5/hosts",
	        "LogPath": "/var/lib/docker/containers/2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5/2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5-json.log",
	        "Name": "/functional-683533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-683533:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-683533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bfbeaf731234bd8a2a846f224799f9e200b32a3d06bd41235ebdd8888b048b5",
	                "LowerDir": "/var/lib/docker/overlay2/1acd109188370a8202282bc099b2d76acb8ce4161034bbc6fe9ade9a122c8224-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1acd109188370a8202282bc099b2d76acb8ce4161034bbc6fe9ade9a122c8224/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1acd109188370a8202282bc099b2d76acb8ce4161034bbc6fe9ade9a122c8224/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1acd109188370a8202282bc099b2d76acb8ce4161034bbc6fe9ade9a122c8224/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-683533",
	                "Source": "/var/lib/docker/volumes/functional-683533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-683533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-683533",
	                "name.minikube.sigs.k8s.io": "functional-683533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "53fe9ec015ad98e0f34c88ecd8f0520ca2f128c371de998361a0c8bd1913c9fb",
	            "SandboxKey": "/var/run/docker/netns/53fe9ec015ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-683533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e5241384e379318d35ac55866a501aa4e5349c69b9b398866023017842878a4",
	                    "EndpointID": "f2afa4c1abbd447af81edd08c4bbb57daae25a080ab91a957ed6e6d9ae93e835",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:8a:bd:8c:9e:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-683533",
	                        "2bfbeaf73123"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-683533 -n functional-683533
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 logs -n 25: (1.244797388s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount3 --alsologtostderr -v=1 │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ mount          │ -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount1 --alsologtostderr -v=1 │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ start          │ -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ start          │ -p functional-683533 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ start          │ -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ ssh            │ functional-683533 ssh findmnt -T /mount1                                                                           │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ dashboard      │ --url --port 36195 -p functional-683533 --alsologtostderr -v=1                                                     │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ ssh            │ functional-683533 ssh findmnt -T /mount2                                                                           │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ ssh            │ functional-683533 ssh findmnt -T /mount3                                                                           │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ mount          │ -p functional-683533 --kill=true                                                                                   │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ update-context │ functional-683533 update-context --alsologtostderr -v=2                                                            │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ update-context │ functional-683533 update-context --alsologtostderr -v=2                                                            │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ update-context │ functional-683533 update-context --alsologtostderr -v=2                                                            │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ image          │ functional-683533 image ls --format short --alsologtostderr                                                        │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ image          │ functional-683533 image ls --format yaml --alsologtostderr                                                         │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ ssh            │ functional-683533 ssh pgrep buildkitd                                                                              │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │                     │
	│ image          │ functional-683533 image build -t localhost/my-image:functional-683533 testdata/build --alsologtostderr             │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ image          │ functional-683533 image ls --format table --alsologtostderr                                                        │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ image          │ functional-683533 image ls --format json --alsologtostderr                                                         │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ image          │ functional-683533 image ls                                                                                         │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:37 UTC │ 24 Nov 25 08:37 UTC │
	│ service        │ functional-683533 service list                                                                                     │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:47 UTC │ 24 Nov 25 08:47 UTC │
	│ service        │ functional-683533 service list -o json                                                                             │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:47 UTC │ 24 Nov 25 08:47 UTC │
	│ service        │ functional-683533 service --namespace=default --https --url hello-node                                             │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:47 UTC │                     │
	│ service        │ functional-683533 service hello-node --url --format={{.IP}}                                                        │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:47 UTC │                     │
	│ service        │ functional-683533 service hello-node --url                                                                         │ functional-683533 │ jenkins │ v1.37.0 │ 24 Nov 25 08:47 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:37:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:37:27.117391   48877 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:27.117656   48877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:27.117666   48877 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:27.117670   48877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:27.117940   48877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:37:27.118386   48877 out.go:368] Setting JSON to false
	I1124 08:37:27.119484   48877 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1193,"bootTime":1763972254,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:37:27.119536   48877 start.go:143] virtualization: kvm guest
	I1124 08:37:27.121083   48877 out.go:179] * [functional-683533] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:37:27.122144   48877 notify.go:221] Checking for updates...
	I1124 08:37:27.122172   48877 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:37:27.123241   48877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:37:27.124451   48877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:37:27.125606   48877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:37:27.126725   48877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:37:27.127798   48877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:37:27.129207   48877 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:37:27.129728   48877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:37:27.153642   48877 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:37:27.153733   48877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:37:27.213884   48877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:37:27.203315817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:37:27.213996   48877 docker.go:319] overlay module found
	I1124 08:37:27.215495   48877 out.go:179] * Using the docker driver based on existing profile
	I1124 08:37:27.216498   48877 start.go:309] selected driver: docker
	I1124 08:37:27.216511   48877 start.go:927] validating driver "docker" against &{Name:functional-683533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-683533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:37:27.216585   48877 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:37:27.218161   48877 out.go:203] 
	W1124 08:37:27.219302   48877 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1124 08:37:27.220753   48877 out.go:203] 
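The RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected result of the --dry-run --memory 250MB invocation recorded in the audit table: minikube validates the requested memory against its usable minimum of 1800MB before doing any other work. A sketch of an invocation that clears the floor (the 2048mb figure is an arbitrary assumption, not taken from this run):

	out/minikube-linux-amd64 start -p functional-683533 --dry-run --memory 2048mb --driver=docker --container-runtime=crio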
	
	
	==> CRI-O <==
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.980067573Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=3aabe9c7-717b-4b98-a536-eee8297a2263 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.980714772Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ee842d8c-f620-4fa6-bbbf-10ea93cb12b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.982264506Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d6f1de42-08df-45af-b0bc-f313b539a3da name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.988687539Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pjmds/dashboard-metrics-scraper" id=7ed4635c-9c40-446a-8ff8-7dfafce3ade5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.988813474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.992829711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.992988933Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a489908e23d78f31b685039e7d821773be7bad55fde8b2207f968b6c69155e3f/merged/etc/group: no such file or directory"
	Nov 24 08:37:31 functional-683533 crio[3605]: time="2025-11-24T08:37:31.993302424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 08:37:32 functional-683533 crio[3605]: time="2025-11-24T08:37:32.020584258Z" level=info msg="Created container fcdf7d313f21e6ac861a6e45fc9a1e3c171b5185ce800e14f21c66bdc68996b1: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pjmds/dashboard-metrics-scraper" id=7ed4635c-9c40-446a-8ff8-7dfafce3ade5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 08:37:32 functional-683533 crio[3605]: time="2025-11-24T08:37:32.021325082Z" level=info msg="Starting container: fcdf7d313f21e6ac861a6e45fc9a1e3c171b5185ce800e14f21c66bdc68996b1" id=afa8f5a9-4853-40db-9dda-508989c3005b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 08:37:32 functional-683533 crio[3605]: time="2025-11-24T08:37:32.023073242Z" level=info msg="Started container" PID=7781 containerID=fcdf7d313f21e6ac861a6e45fc9a1e3c171b5185ce800e14f21c66bdc68996b1 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pjmds/dashboard-metrics-scraper id=afa8f5a9-4853-40db-9dda-508989c3005b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6018a9750ddf7238a92d17753aa4b19063a1df416eade2ac7ab388e41c52b09
	Nov 24 08:37:33 functional-683533 crio[3605]: time="2025-11-24T08:37:33.245621763Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5623c05c-f703-4165-ae96-87bc0110b424 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:37:38 functional-683533 crio[3605]: time="2025-11-24T08:37:38.245675933Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c45c528a-e23f-4aa3-8e81-40769b9048ff name=/runtime.v1.ImageService/PullImage
	Nov 24 08:37:58 functional-683533 crio[3605]: time="2025-11-24T08:37:58.246032478Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=952b6e9b-5717-4b6d-8f1d-45add26aa2af name=/runtime.v1.ImageService/PullImage
	Nov 24 08:38:09 functional-683533 crio[3605]: time="2025-11-24T08:38:09.243380898Z" level=info msg="Stopping pod sandbox: 329a23591657aac10e0c4b52377b508e34a7757d9627c88e60620ab9fd0b6330" id=65b086d3-8ff9-4b75-a488-f755315cd692 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 08:38:09 functional-683533 crio[3605]: time="2025-11-24T08:38:09.243438889Z" level=info msg="Stopped pod sandbox (already stopped): 329a23591657aac10e0c4b52377b508e34a7757d9627c88e60620ab9fd0b6330" id=65b086d3-8ff9-4b75-a488-f755315cd692 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 08:38:09 functional-683533 crio[3605]: time="2025-11-24T08:38:09.243756086Z" level=info msg="Removing pod sandbox: 329a23591657aac10e0c4b52377b508e34a7757d9627c88e60620ab9fd0b6330" id=72bc1edc-23b9-4d72-b3ca-0d0b92b56df4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 08:38:09 functional-683533 crio[3605]: time="2025-11-24T08:38:09.24717823Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 08:38:09 functional-683533 crio[3605]: time="2025-11-24T08:38:09.247248219Z" level=info msg="Removed pod sandbox: 329a23591657aac10e0c4b52377b508e34a7757d9627c88e60620ab9fd0b6330" id=72bc1edc-23b9-4d72-b3ca-0d0b92b56df4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 08:38:28 functional-683533 crio[3605]: time="2025-11-24T08:38:28.246150754Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e17f5ec0-e5e5-47d5-9f8d-f1e128a74799 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:38:44 functional-683533 crio[3605]: time="2025-11-24T08:38:44.245885287Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=10a737b3-2279-4cf8-9c19-77bffb5f7ee6 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:39:51 functional-683533 crio[3605]: time="2025-11-24T08:39:51.245625379Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fd742bec-5b5c-4ec3-ad46-4d267d3b1792 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:40:12 functional-683533 crio[3605]: time="2025-11-24T08:40:12.245487841Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3c8ebd04-4402-4c00-9f81-3bdd75b3bde2 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:42:39 functional-683533 crio[3605]: time="2025-11-24T08:42:39.246654238Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=16345ba2-51c7-45f4-b62f-a9837bbccf47 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:42:53 functional-683533 crio[3605]: time="2025-11-24T08:42:53.246474044Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=38b3bed4-a069-42e1-a88f-adf9cfdef314 name=/runtime.v1.ImageService/PullImage
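The repeated "Pulling image: kicbase/echo-server:latest" entries above are kubelet's backoff retries of the same failed pull, with intervals growing from seconds to minutes. The resolution failure can be reproduced directly against the runtime from inside the node, and a fully qualified reference checked the same way (the docker.io/ prefix is an assumption, not taken from this run):

	minikube -p functional-683533 ssh -- sudo crictl pull kicbase/echo-server:latest
	minikube -p functional-683533 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest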
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fcdf7d313f21e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   d6018a9750ddf       dashboard-metrics-scraper-77bf4d6c4c-pjmds   kubernetes-dashboard
	8a97f709581a6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   db208fc6773d3       kubernetes-dashboard-855c9754f9-x8lzz        kubernetes-dashboard
	14b3e495c1221       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   fcf82a78c6641       busybox-mount                                default
	c42f09b22e98b       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  10 minutes ago      Running             myfrontend                  0                   c82ec0fc8852c       sp-pod                                       default
	11383dfabdbf1       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   b6ca331b81447       nginx-svc                                    default
	aafd20b581b06       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   675286d26bf55       mysql-5bb876957f-2w68h                       default
	c00f049005125       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   be08a69ff3f94       storage-provisioner                          kube-system
	5cf34ba706739       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Running             kube-controller-manager     2                   ace8b0134e0be       kube-controller-manager-functional-683533    kube-system
	9e3dd59cac37a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Running             kube-apiserver              0                   2de2b2c2d9184       kube-apiserver-functional-683533             kube-system
	d0a59425a59ee       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   b1d4072bd9fe0       etcd-functional-683533                       kube-system
	26a2bc0addb00       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 11 minutes ago      Running             kube-proxy                  1                   b8661b774f10e       kube-proxy-bgw8m                             kube-system
	f213cd538f0d3       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 11 minutes ago      Running             kube-scheduler              1                   8e95d14736520       kube-scheduler-functional-683533             kube-system
	02a1f325e1d9b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 11 minutes ago      Exited              kube-controller-manager     1                   ace8b0134e0be       kube-controller-manager-functional-683533    kube-system
	45a1fc97c037e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   be08a69ff3f94       storage-provisioner                          kube-system
	e3c82cc38fba7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   be067f9fd2b8d       kindnet-tdn89                                kube-system
	43d14729a347f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   a1e71472476dc       coredns-66bc5c9577-gxqx6                     kube-system
	f6733bbca240a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   a1e71472476dc       coredns-66bc5c9577-gxqx6                     kube-system
	bf17c6b7dc818       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   be067f9fd2b8d       kindnet-tdn89                                kube-system
	3156d167def7f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 12 minutes ago      Exited              kube-proxy                  0                   b8661b774f10e       kube-proxy-bgw8m                             kube-system
	459c0e553cb89       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 12 minutes ago      Exited              kube-scheduler              0                   8e95d14736520       kube-scheduler-functional-683533             kube-system
	5cceb75debf41       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 12 minutes ago      Exited              etcd                        0                   b1d4072bd9fe0       etcd-functional-683533                       kube-system
	
	
	==> coredns [43d14729a347fd6de67fddce3bcfa05f982f28b60a2a0a319d8282e9735739d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39732 - 18024 "HINFO IN 8678620387334048489.5924103935536584481. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081356251s
	
	
	==> coredns [f6733bbca240a28c4a4289a8b9b9630cae5885a55083e494ebbaf1d00f27c303] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44152 - 9268 "HINFO IN 4637688022538060990.6316513620776240273. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045297511s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-683533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-683533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-683533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_35_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:35:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-683533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:47:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:46:33 +0000   Mon, 24 Nov 2025 08:35:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:46:33 +0000   Mon, 24 Nov 2025 08:35:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:46:33 +0000   Mon, 24 Nov 2025 08:35:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:46:33 +0000   Mon, 24 Nov 2025 08:35:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-683533
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                12e3418b-85a2-4a48-a9bc-237e33f0580e
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-g5zbn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-qfsn9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-2w68h                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gxqx6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-683533                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-tdn89                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-683533              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-683533     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bgw8m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-683533              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-pjmds    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-x8lzz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-683533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-683533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-683533 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-683533 event: Registered Node functional-683533 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-683533 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x9 over 11m)  kubelet          Node functional-683533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-683533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-683533 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-683533 event: Registered Node functional-683533 in Controller
	
	
	==> dmesg <==
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 08:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.027365] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023898] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.024840] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.022897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +4.031610] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +8.191119] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[ +16.382253] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	
	
	==> etcd [5cceb75debf417236e6dc9d9d86372e8163fc79fca5b004943e106498136872f] <==
	{"level":"warn","ts":"2025-11-24T08:35:10.459448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.465864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.478810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.492376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.499724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.507035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:35:10.551010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:36:07.698299Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T08:36:07.698407Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-683533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T08:36:07.698521Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:36:07.700179Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:36:07.700252Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:36:07.700275Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T08:36:07.700302Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T08:36:07.700298Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T08:36:07.700380Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:36:07.700436Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:36:07.700447Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T08:36:07.700374Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:36:07.700466Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:36:07.700477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:36:07.702833Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T08:36:07.702911Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:36:07.702946Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T08:36:07.702963Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-683533","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d0a59425a59eeee42608e5e235b57c6a49b72186609e9d25f99bc25728f2a6a5] <==
	{"level":"warn","ts":"2025-11-24T08:36:30.763433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.769573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.775838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.782470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.792509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.799889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.807064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.813804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.821916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.829249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.835937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.842289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.849311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.857022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.863876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.872268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.889110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.896530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.902867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:36:30.949504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:37:17.929840Z","caller":"traceutil/trace.go:172","msg":"trace[1487124520] transaction","detail":"{read_only:false; response_revision:692; number_of_response:1; }","duration":"118.744947ms","start":"2025-11-24T08:37:17.811074Z","end":"2025-11-24T08:37:17.929819Z","steps":["trace[1487124520] 'process raft request'  (duration: 78.423248ms)","trace[1487124520] 'compare'  (duration: 40.231349ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T08:37:17.934002Z","caller":"traceutil/trace.go:172","msg":"trace[21401593] transaction","detail":"{read_only:false; response_revision:693; number_of_response:1; }","duration":"117.783293ms","start":"2025-11-24T08:37:17.816196Z","end":"2025-11-24T08:37:17.933979Z","steps":["trace[21401593] 'process raft request'  (duration: 117.593459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:46:30.452921Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1125}
	{"level":"info","ts":"2025-11-24T08:46:30.472062Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1125,"took":"18.822814ms","hash":3215152112,"current-db-size-bytes":3489792,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-24T08:46:30.472113Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3215152112,"revision":1125,"compact-revision":-1}
	
	
	==> kernel <==
	 08:47:21 up 29 min,  0 user,  load average: 0.49, 0.24, 0.24
	Linux functional-683533 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bf17c6b7dc818dde147622b9f997d961e8352412646a484929fede46948d1bdc] <==
	I1124 08:35:19.563516       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 08:35:19.656455       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 08:35:19.656611       1 main.go:148] setting mtu 1500 for CNI 
	I1124 08:35:19.656632       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 08:35:19.656648       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T08:35:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 08:35:19.763157       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 08:35:19.763236       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 08:35:19.763247       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 08:35:19.857169       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 08:35:20.234925       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 08:35:20.234949       1 metrics.go:72] Registering metrics
	I1124 08:35:20.235009       1 controller.go:711] "Syncing nftables rules"
	I1124 08:35:29.763823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:35:29.763885       1 main.go:301] handling current node
	I1124 08:35:39.767782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:35:39.767822       1 main.go:301] handling current node
	I1124 08:35:49.763831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:35:49.763891       1 main.go:301] handling current node
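
Note: "nri plugin exited: failed to connect to NRI service" only means /var/run/nri/nri.sock is absent, i.e. the container runtime here does not expose NRI; kindnet falls back to its informer path and keeps reconciling, as the ten-second "handling current node" cadence shows. A presence check on the node (sketch):

	minikube -p functional-683533 ssh -- 'test -S /var/run/nri/nri.sock && echo nri-enabled || echo nri-absent'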
	
	
	==> kindnet [e3c82cc38fba73f91223b1f1e01a5b50414df17b5db8e1d9127aa0aec4f97f96] <==
	I1124 08:45:18.063215       1 main.go:301] handling current node
	I1124 08:45:28.062176       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:45:28.062218       1 main.go:301] handling current node
	I1124 08:45:38.063709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:45:38.063761       1 main.go:301] handling current node
	I1124 08:45:48.062765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:45:48.062807       1 main.go:301] handling current node
	I1124 08:45:58.067409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:45:58.067443       1 main.go:301] handling current node
	I1124 08:46:08.066527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:08.066593       1 main.go:301] handling current node
	I1124 08:46:18.070329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:18.070409       1 main.go:301] handling current node
	I1124 08:46:28.066857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:28.066904       1 main.go:301] handling current node
	I1124 08:46:38.062424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:38.062461       1 main.go:301] handling current node
	I1124 08:46:48.071499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:48.071537       1 main.go:301] handling current node
	I1124 08:46:58.062108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:46:58.062152       1 main.go:301] handling current node
	I1124 08:47:08.066356       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:47:08.066394       1 main.go:301] handling current node
	I1124 08:47:18.062777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:47:18.062830       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9e3dd59cac37a3080dde2209578c6a89e6957afee757fc3d0753930eeea709d0] <==
	I1124 08:36:32.304663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 08:36:32.336132       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1124 08:36:32.511316       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 08:36:32.512420       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 08:36:32.516376       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 08:36:33.098889       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 08:36:33.183215       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 08:36:33.228887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 08:36:33.234901       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 08:36:34.550780       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 08:36:55.724450       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.253.33"}
	I1124 08:36:59.589787       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.66.109"}
	I1124 08:37:01.734542       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.154.105"}
	I1124 08:37:09.781665       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.25.98"}
	E1124 08:37:12.884554       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60938: use of closed network connection
	E1124 08:37:13.719611       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60956: use of closed network connection
	E1124 08:37:15.886968       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60980: use of closed network connection
	E1124 08:37:17.719880       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35936: use of closed network connection
	E1124 08:37:18.002292       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35972: use of closed network connection
	I1124 08:37:19.597217       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.65.129"}
	E1124 08:37:26.702123       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36074: use of closed network connection
	I1124 08:37:28.129834       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 08:37:28.240415       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.74.49"}
	I1124 08:37:28.251895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.173.123"}
	I1124 08:46:31.333838       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
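
Note: the conn.go:339 "use of closed network connection" errors on 8441 coincide with the test host (192.168.49.1) tearing down service-tunnel and port-forward style sessions; they are client disconnects logged on the server side, not apiserver faults. One way to surface the same line (illustrative; the service name comes from the allocations above, but the port mapping is an assumption):

	kubectl --context functional-683533 port-forward svc/hello-node 8080:8080 &
	kill %1   # abrupt teardown of the stream produces the server-side read error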
	
	
	==> kube-controller-manager [02a1f325e1d9ba0e63f41a8d815926ca558800fd712f4ffb96fe7eaa8e9e9f1c] <==
	I1124 08:35:58.205304       1 serving.go:386] Generated self-signed cert in-memory
	I1124 08:35:58.979665       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1124 08:35:58.979685       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:35:58.981045       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 08:35:58.981108       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 08:35:58.981241       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 08:35:58.981376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 08:36:27.984848       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": net/http: TLS handshake timeout"
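
Note: this controller-manager instance started at 08:35:58 and aborted at 08:36:27 because the apiserver on 192.168.49.2:8441 was still coming back (the same TLS-handshake-timeout window appears in the scheduler and kube-proxy logs); the replacement instance below synced at 08:36:34. The probe it was failing can be run by hand (sketch; -k skips certificate verification for brevity):

	minikube -p functional-683533 ssh -- curl -sk https://192.168.49.2:8441/healthz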
	
	
	==> kube-controller-manager [5cf34ba706739ae112e172799afc5ab4bca9cffa6f49da4af4d9ac9264143d07] <==
	I1124 08:36:34.346692       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 08:36:34.346756       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 08:36:34.346761       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 08:36:34.346766       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 08:36:34.346774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 08:36:34.346803       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 08:36:34.346790       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 08:36:34.346839       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 08:36:34.346866       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 08:36:34.346889       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 08:36:34.346904       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-683533"
	I1124 08:36:34.346973       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 08:36:34.346995       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 08:36:34.347045       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 08:36:34.347619       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 08:36:34.347891       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 08:36:34.352419       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 08:36:34.366545       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 08:36:34.368054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 08:37:28.179605       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:37:28.183065       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:37:28.185274       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:37:28.190202       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:37:28.191929       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:37:28.195636       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [26a2bc0addb00cb44370f81c49fc0126d48879035241fe536cac65ec8d4b14f0] <==
	E1124 08:35:57.835199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-683533&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:35:58.758278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-683533&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:36:01.128433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-683533&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:36:05.258945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-683533&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:36:27.951497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-683533&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 08:36:44.934667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 08:36:44.934703       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:36:44.934772       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:36:44.953099       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:36:44.953150       1 server_linux.go:132] "Using iptables Proxier"
	I1124 08:36:44.958553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:36:44.958927       1 server.go:527] "Version info" version="v1.34.2"
	I1124 08:36:44.958964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:36:44.961560       1 config.go:200] "Starting service config controller"
	I1124 08:36:44.961579       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:36:44.961608       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:36:44.961615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:36:44.961638       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:36:44.961649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:36:44.961671       1 config.go:309] "Starting node config controller"
	I1124 08:36:44.961687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:36:44.961695       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:36:45.061744       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:36:45.061756       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 08:36:45.061798       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
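
Note: the server.go:256 warning is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP, loopback included. If the suggested narrowing is wanted, the kubeadm-managed kube-proxy ConfigMap is the place to set it (the ConfigMap name and config.conf key are kubeadm defaults, assumed here):

	kubectl --context functional-683533 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# then set nodePortAddresses: ["primary"] in config.conf and restart the kube-proxy DaemonSet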
	
	
	==> kube-proxy [3156d167def7f043cb719e7fdff3d90c73dcf239c6e301415b550f9c506741eb] <==
	I1124 08:35:19.422563       1 server_linux.go:53] "Using iptables proxy"
	I1124 08:35:19.503394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 08:35:19.604124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 08:35:19.604156       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:35:19.604264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:35:19.625516       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:35:19.625573       1 server_linux.go:132] "Using iptables Proxier"
	I1124 08:35:19.631636       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:35:19.632137       1 server.go:527] "Version info" version="v1.34.2"
	I1124 08:35:19.632172       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:35:19.634746       1 config.go:200] "Starting service config controller"
	I1124 08:35:19.634809       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:35:19.634869       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:35:19.634820       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:35:19.634762       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:35:19.634970       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:35:19.635044       1 config.go:309] "Starting node config controller"
	I1124 08:35:19.635050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:35:19.635057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:35:19.735005       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:35:19.735049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:35:19.735116       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [459c0e553cb898465f3385886680668e261d920b662f7eb1fa323ee3b063d374] <==
	E1124 08:35:10.947104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 08:35:10.947189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:35:10.947230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:35:10.947245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:35:10.947248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:35:11.754452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:35:11.775763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:35:11.779890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:35:11.814020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:35:11.826282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:35:11.896989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:35:11.902056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:35:11.992812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 08:35:12.062242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:35:12.066191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 08:35:12.083515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 08:35:12.109698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:35:12.231864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 08:35:14.845081       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:35:56.969439       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 08:35:56.969506       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 08:35:56.969547       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 08:35:56.969542       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:35:56.969661       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 08:35:56.969806       1 run.go:72] "command failed" err="finished without leader elect"
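
Note: the "forbidden" list errors between 08:35:10 and 08:35:12 are ordinary scheduler bootstrap noise: its RBAC bindings were not yet visible, and the informers recovered by 08:35:14 ("Caches are synced"). The closing "finished without leader elect" records a graceful stop at 08:35:56, not a crash. The permissions involved can be checked explicitly (sketch):

	kubectl --context functional-683533 auth can-i list pods --as=system:kube-scheduler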
	
	
	==> kube-scheduler [f213cd538f0d32cd80a3de8525672a9df60efc964955b26c5a53d2b2c112bc3c] <==
	E1124 08:36:07.467129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:36:07.597426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 08:36:07.684915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:36:18.017935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:36:18.286490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 08:36:18.519624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:36:23.035313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:36:25.047210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:36:25.673870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 08:36:25.773411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:36:25.817970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:36:26.432248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 08:36:26.682516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:36:26.699892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 08:36:26.939857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:36:27.187125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 08:36:27.218608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 08:36:27.314124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:36:27.414645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 08:36:27.551075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 08:36:27.925850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 08:36:28.000384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:36:29.561775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39954->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 08:36:29.561849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39948->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1124 08:36:51.495474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 08:44:41 functional-683533 kubelet[4194]: E1124 08:44:41.245405    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:44:47 functional-683533 kubelet[4194]: E1124 08:44:47.245285    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:44:52 functional-683533 kubelet[4194]: E1124 08:44:52.245737    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:45:02 functional-683533 kubelet[4194]: E1124 08:45:02.245202    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:45:05 functional-683533 kubelet[4194]: E1124 08:45:05.245683    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:45:14 functional-683533 kubelet[4194]: E1124 08:45:14.245448    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:45:20 functional-683533 kubelet[4194]: E1124 08:45:20.245530    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:45:27 functional-683533 kubelet[4194]: E1124 08:45:27.245970    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:45:34 functional-683533 kubelet[4194]: E1124 08:45:34.245407    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:45:40 functional-683533 kubelet[4194]: E1124 08:45:40.245715    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:45:45 functional-683533 kubelet[4194]: E1124 08:45:45.245454    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:45:54 functional-683533 kubelet[4194]: E1124 08:45:54.245370    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:45:57 functional-683533 kubelet[4194]: E1124 08:45:57.245493    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:46:05 functional-683533 kubelet[4194]: E1124 08:46:05.245271    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:46:08 functional-683533 kubelet[4194]: E1124 08:46:08.245113    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:46:19 functional-683533 kubelet[4194]: E1124 08:46:19.247327    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:46:19 functional-683533 kubelet[4194]: E1124 08:46:19.247411    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:46:30 functional-683533 kubelet[4194]: E1124 08:46:30.245194    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:46:34 functional-683533 kubelet[4194]: E1124 08:46:34.245592    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:46:44 functional-683533 kubelet[4194]: E1124 08:46:44.245634    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:46:49 functional-683533 kubelet[4194]: E1124 08:46:49.246645    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:46:56 functional-683533 kubelet[4194]: E1124 08:46:56.245701    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:47:01 functional-683533 kubelet[4194]: E1124 08:47:01.244955    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	Nov 24 08:47:07 functional-683533 kubelet[4194]: E1124 08:47:07.245776    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-g5zbn" podUID="c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f"
	Nov 24 08:47:12 functional-683533 kubelet[4194]: E1124 08:47:12.245607    4194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qfsn9" podUID="9c55ad1f-de3b-4818-91bb-442508e98c52"
	
	
	==> kubernetes-dashboard [8a97f709581a64976be461c5690c84dd340981489f8f077400246f4af2fa19bd] <==
	2025/11/24 08:37:31 Using namespace: kubernetes-dashboard
	2025/11/24 08:37:31 Using in-cluster config to connect to apiserver
	2025/11/24 08:37:31 Using secret token for csrf signing
	2025/11/24 08:37:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 08:37:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 08:37:31 Successful initial request to the apiserver, version: v1.34.2
	2025/11/24 08:37:31 Generating JWE encryption key
	2025/11/24 08:37:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 08:37:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 08:37:31 Initializing JWE encryption key from synchronized object
	2025/11/24 08:37:31 Creating in-cluster Sidecar client
	2025/11/24 08:37:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:37:31 Serving insecurely on HTTP port: 9090
	2025/11/24 08:38:01 Successful request to sidecar
	2025/11/24 08:37:31 Starting overwatch
	
	
	==> storage-provisioner [45a1fc97c037e8b704966842f2bee9c9b0571bbb09e03a491c182e70ce47b511] <==
	I1124 08:35:57.719374       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 08:35:57.722682       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c00f049005125b590a25b43bebcbae300ffc6817bc41c452106d1b224ddcb651] <==
	W1124 08:46:56.248697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:46:58.251846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:46:58.255470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:00.258005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:00.262207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:02.265400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:02.269159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:04.272416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:04.276204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:06.279912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:06.283419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:08.286764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:08.290825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:10.294361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:10.299444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:12.302735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:12.306454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:14.309654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:14.314757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:16.318264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:16.322063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:18.325525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:18.330678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:20.333843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:47:20.338397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-683533 -n functional-683533
helpers_test.go:269: (dbg) Run:  kubectl --context functional-683533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-g5zbn hello-node-connect-7d85dfc575-qfsn9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-683533 describe pod busybox-mount hello-node-75c85bcc94-g5zbn hello-node-connect-7d85dfc575-qfsn9
helpers_test.go:290: (dbg) kubectl --context functional-683533 describe pod busybox-mount hello-node-75c85bcc94-g5zbn hello-node-connect-7d85dfc575-qfsn9:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-683533/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:37:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://14b3e495c122115a181282add32943531ce108c0094b8c5c25a6e80e022672ef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 08:37:22 +0000
	      Finished:     Mon, 24 Nov 2025 08:37:22 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp4b8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wp4b8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-683533
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.345s (1.345s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-g5zbn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-683533/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:36:59 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbnz8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tbnz8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g5zbn to functional-683533
	  Normal   Pulling    7m31s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m31s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m31s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    15s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-qfsn9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-683533/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:37:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jwxjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jwxjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qfsn9 to functional-683533
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.82s)
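
Root cause, per the kubelet events above: CRI-O's short-name mode is set to enforcing, so the unqualified reference "kicbase/echo-server" resolves to an ambiguous list of candidate registries and the pull is rejected outright. A minimal workaround sketch, assuming the image is meant to come from docker.io (the registry prefix is an assumption, not confirmed by this report):

	# sketch: a fully qualified reference bypasses short-name resolution entirely
	# (docker.io is assumed to be the intended registry)
	kubectl --context functional-683533 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:latest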

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-683533 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-683533 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-g5zbn" [c61ac7b5-55fc-4a6c-9f2b-dcb852c61b6f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-683533 -n functional-683533
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 08:46:59.928488554 +0000 UTC m=+1105.989212486
functional_test.go:1460: (dbg) Run:  kubectl --context functional-683533 describe po hello-node-75c85bcc94-g5zbn -n default
functional_test.go:1460: (dbg) kubectl --context functional-683533 describe po hello-node-75c85bcc94-g5zbn -n default:
Name:             hello-node-75c85bcc94-g5zbn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-683533/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:36:59 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbnz8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tbnz8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g5zbn to functional-683533
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-683533 logs hello-node-75c85bcc94-g5zbn -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-683533 logs hello-node-75c85bcc94-g5zbn -n default: exit status 1 (73.053256ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-g5zbn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-683533 logs hello-node-75c85bcc94-g5zbn -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)
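
Same enforcing-mode rejection as ServiceCmdConnect above. An alternative sketch keeps the unqualified name the tests use and instead registers a short-name alias inside the node; the drop-in file name and the docker.io mapping are illustrative assumptions:

	# sketch: map the short name to a fully qualified one via registries.conf.d
	# (file name and docker.io mapping are assumptions)
	printf '%s\n' '[aliases]' '"kicbase/echo-server" = "docker.io/kicbase/echo-server"' \
	  | out/minikube-linux-amd64 -p functional-683533 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	out/minikube-linux-amd64 -p functional-683533 ssh -- sudo systemctl restart crio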

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image load --daemon kicbase/echo-server:functional-683533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-683533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.00s)
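
Here "image load --daemon" is expected to copy the tag from the host Docker daemon into the node's CRI-O store, and the follow-up "image ls" should list it. A debugging sketch that checks both ends of the handoff (crictl inside the node is assumed available, as it normally is in the kicbase image):

	# is the tag present on the host side at all?
	docker image inspect kicbase/echo-server:functional-683533 --format '{{.Id}}'
	# retry the transfer with verbose logging, then list CRI-O's store directly
	out/minikube-linux-amd64 -p functional-683533 image load --daemon kicbase/echo-server:functional-683533 --alsologtostderr
	out/minikube-linux-amd64 -p functional-683533 ssh -- sudo crictl images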

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image load --daemon kicbase/echo-server:functional-683533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-683533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-683533
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image load --daemon kicbase/echo-server:functional-683533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 image ls: (2.263364229s)
functional_test.go:461: expected "kicbase/echo-server:functional-683533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image save kicbase/echo-server:functional-683533 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 image save kicbase/echo-server:functional-683533 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.629478353s)
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.63s)
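
The save here exits 0 after 1.6s, yet the tar never appears on disk, so the export silently drops its output somewhere between the node and the host. A verification sketch that fails fast when the artifact is missing:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-683533 image save kicbase/echo-server:functional-683533 "$tar" --alsologtostderr
	# treat a zero exit without an archive as a failure
	test -s "$tar" || { echo "image save exited 0 but wrote no archive" >&2; exit 1; }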

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 08:37:07.828015   44405 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:07.828305   44405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:07.828314   44405 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:07.828319   44405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:07.828532   44405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:37:07.829108   44405 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:37:07.829198   44405 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:37:07.829610   44405 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
	I1124 08:37:07.849364   44405 ssh_runner.go:195] Run: systemctl --version
	I1124 08:37:07.849430   44405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
	I1124 08:37:07.868475   44405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
	I1124 08:37:07.968708   44405 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1124 08:37:07.968777   44405 cache_images.go:255] Failed to load cached images for "functional-683533": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1124 08:37:07.968819   44405 cache_images.go:267] failed pushing to: functional-683533

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
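
The stderr above ("stat ...: no such file or directory") shows this failure cascades from ImageSaveToFile: the tar this test loads was never written. A guard sketch that reports the breakage at its source instead of here:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	if [ -s "$tar" ]; then
	  out/minikube-linux-amd64 -p functional-683533 image load "$tar" --alsologtostderr
	else
	  echo "skipping image load: $tar missing (ImageSaveToFile already failed)" >&2
	fi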

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-683533
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image save --daemon kicbase/echo-server:functional-683533 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-683533
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-683533: exit status 1 (18.213236ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-683533

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-683533

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
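
After "image save --daemon" the test inspects "localhost/kicbase/echo-server:functional-683533", the localhost/ prefix being how CRI-O canonicalizes unqualified tags. A sketch that checks both spellings the Docker daemon might have received before concluding the transfer failed:

	for ref in localhost/kicbase/echo-server:functional-683533 \
	           kicbase/echo-server:functional-683533; do
	  docker image inspect "$ref" --format '{{.Id}}' 2>/dev/null && echo "found $ref"
	done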

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 service --namespace=default --https --url hello-node: exit status 115 (540.515793ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31552
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-683533 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
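
minikube computed the URL (printed on stdout) but exits with SVC_UNREACHABLE because no running pod backs hello-node; this, like the Format and URL subtests that follow, is a cascade of the DeployApp image-pull failure above, not a tunnel problem. A sketch to confirm the service is simply endpoint-less:

	kubectl --context functional-683533 get pods -l app=hello-node
	kubectl --context functional-683533 get endpointslices -l kubernetes.io/service-name=hello-node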

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 service hello-node --url --format={{.IP}}: exit status 115 (541.769678ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-683533 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 service hello-node --url: exit status 115 (540.044702ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31552
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-683533 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31552
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-504554 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-504554 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-kkv2r" [d7bddf02-867f-4dc1-8b44-b32edbb40a7d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-504554 -n functional-504554
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 08:59:09.609757771 +0000 UTC m=+1835.670481695
functional_test.go:1645: (dbg) Run:  kubectl --context functional-504554 describe po hello-node-connect-9f67c86d4-kkv2r -n default
functional_test.go:1645: (dbg) kubectl --context functional-504554 describe po hello-node-connect-9f67c86d4-kkv2r -n default:
Name:             hello-node-connect-9f67c86d4-kkv2r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-504554/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:49:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h54sc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-h54sc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-kkv2r to functional-504554
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-504554 logs hello-node-connect-9f67c86d4-kkv2r -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-504554 logs hello-node-connect-9f67c86d4-kkv2r -n default: exit status 1 (68.401955ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-kkv2r" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-504554 logs hello-node-connect-9f67c86d4-kkv2r -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-504554 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-kkv2r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-504554/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:49:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h54sc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-h54sc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-kkv2r to functional-504554
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-504554 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-504554 logs -l app=hello-node-connect: exit status 1 (58.035106ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-kkv2r" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-504554 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-504554 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.50.50
IPs:                      10.109.50.50
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32697/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
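
The empty Endpoints field above is the direct reason the NodePort probe fails on v1.35.0-beta.0 as well: the same short-name pull failure means no pod ever backs the service. A quick probe sketch (node IP and NodePort taken from the describe output above; with no endpoints the connection typically times out or is refused):

	curl --max-time 5 http://192.168.49.2:32697/ || echo "no backend behind NodePort 32697"
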
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-504554
helpers_test.go:243: (dbg) docker inspect functional-504554:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20",
	        "Created": "2025-11-24T08:47:26.439598634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T08:47:26.472566754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20/hosts",
	        "LogPath": "/var/lib/docker/containers/e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20/e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20-json.log",
	        "Name": "/functional-504554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-504554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-504554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2d2f9a3a0979d7803b2128aab8c6cc97fa34eb89658477af8c68db2c736ad20",
	                "LowerDir": "/var/lib/docker/overlay2/46ba922094fa08e529cdbe1109edd1bb0ed0caaf956491d19e8eed0fbaaa02ae-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46ba922094fa08e529cdbe1109edd1bb0ed0caaf956491d19e8eed0fbaaa02ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46ba922094fa08e529cdbe1109edd1bb0ed0caaf956491d19e8eed0fbaaa02ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46ba922094fa08e529cdbe1109edd1bb0ed0caaf956491d19e8eed0fbaaa02ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-504554",
	                "Source": "/var/lib/docker/volumes/functional-504554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-504554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-504554",
	                "name.minikube.sigs.k8s.io": "functional-504554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f41720e1b34af3916565580df001f3f3035e861f5bf5ccd4455c07b65848ce5",
	            "SandboxKey": "/var/run/docker/netns/2f41720e1b34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-504554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "939d8d58e5ddf275902fe0990a7317f482f96770b6a75cfc565f11b35047083b",
	                    "EndpointID": "bc0fc26c02ae240bfbad68cc54282fdeebed050ceb1d387828efc0efff57ce85",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "06:ae:0a:79:29:7e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-504554",
	                        "e2d2f9a3a097"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
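The NetworkSettings.Ports map in the inspect output above is where the kic container's published host ports live: container port 8441/tcp (the profile's APIServerPort) is bound on the host at 127.0.0.1:32786. Below is a minimal Go sketch for pulling that binding out of docker inspect JSON; the struct fields mirror the output above, while the hard-coded port key and the pipe-from-stdin usage are illustrative assumptions, not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding mirrors the HostIp/HostPort objects under
// NetworkSettings.Ports in the inspect output above.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Usage: docker inspect functional-504554 | go run inspectports.go
	var cs []container // docker inspect emits a JSON array of containers
	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil || len(cs) == 0 {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	// 8441/tcp is the API server port for this profile (APIServerPort:8441
	// in the cluster config later in these logs).
	for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver reachable at %s:%s\n", b.HostIP, b.HostPort)
	}
}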
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-504554 -n functional-504554
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 logs -n 25: (1.303173934s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-504554 ssh cat /etc/hostname                                                                                                                         │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ tunnel         │ functional-504554 tunnel --alsologtostderr                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │                     │
	│ tunnel         │ functional-504554 tunnel --alsologtostderr                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │                     │
	│ tunnel         │ functional-504554 tunnel --alsologtostderr                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │                     │
	│ image          │ functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr                                                                   │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls                                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr                                                                   │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls                                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr                                                                   │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ update-context │ functional-504554 update-context --alsologtostderr -v=2                                                                                                         │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls                                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ update-context │ functional-504554 update-context --alsologtostderr -v=2                                                                                                         │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image save kicbase/echo-server:functional-504554 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ update-context │ functional-504554 update-context --alsologtostderr -v=2                                                                                                         │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image rm kicbase/echo-server:functional-504554 --alsologtostderr                                                                              │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls                                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image save --daemon kicbase/echo-server:functional-504554 --alsologtostderr                                                                   │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls --format yaml --alsologtostderr                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls --format short --alsologtostderr                                                                                                     │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │                     │
	│ image          │ functional-504554 image ls --format json --alsologtostderr                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ ssh            │ functional-504554 ssh pgrep buildkitd                                                                                                                           │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │                     │
	│ image          │ functional-504554 image ls --format table --alsologtostderr                                                                                                     │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image build -t localhost/my-image:functional-504554 testdata/build --alsologtostderr                                                          │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	│ image          │ functional-504554 image ls                                                                                                                                      │ functional-504554 │ jenkins │ v1.37.0 │ 24 Nov 25 08:49 UTC │ 24 Nov 25 08:49 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
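The audit rows above trace the echo-server image round-trip that the failing ImageCommands tests drive: load --daemon, save to a tar, rm, load from the tar, save --daemon, with image ls checks in between. A sketch of the same sequence driven from Go, assuming the binary path and profile name from this run; the tar path here is illustrative rather than the workspace path shown in the table.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary used in this job and echoes
// combined output, mirroring the (dbg) Run lines in this report.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	if err != nil {
		fmt.Println("exit error:", err)
	}
}

func main() {
	p := "functional-504554"           // profile name from the audit table
	img := "kicbase/echo-server:" + p  // image under test
	tar := "/tmp/echo-server-save.tar" // illustrative path (assumption)
	run("-p", p, "image", "load", "--daemon", img)
	run("-p", p, "image", "save", img, tar)
	run("-p", p, "image", "rm", img)
	run("-p", p, "image", "load", tar)
	run("-p", p, "image", "save", "--daemon", img)
	run("-p", p, "image", "ls")
}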
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:49:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:49:09.663071   65756 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:49:09.663301   65756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.663309   65756 out.go:374] Setting ErrFile to fd 2...
	I1124 08:49:09.663313   65756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.663509   65756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:49:09.663913   65756 out.go:368] Setting JSON to false
	I1124 08:49:09.664794   65756 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1896,"bootTime":1763972254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:49:09.664848   65756 start.go:143] virtualization: kvm guest
	I1124 08:49:09.666671   65756 out.go:179] * [functional-504554] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:49:09.668285   65756 notify.go:221] Checking for updates...
	I1124 08:49:09.668305   65756 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:49:09.669628   65756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:49:09.670831   65756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:49:09.672054   65756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:49:09.673142   65756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:49:09.674359   65756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:49:09.675814   65756 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:49:09.676322   65756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:49:09.699718   65756 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:49:09.699815   65756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:49:09.758045   65756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:49:09.747417121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:49:09.758174   65756 docker.go:319] overlay module found
	I1124 08:49:09.760759   65756 out.go:179] * Using the docker driver based on existing profile
	I1124 08:49:09.761996   65756 start.go:309] selected driver: docker
	I1124 08:49:09.762013   65756 start.go:927] validating driver "docker" against &{Name:functional-504554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-504554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:49:09.762123   65756 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:49:09.762228   65756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:49:09.826676   65756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:49:09.816760593 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:49:09.827369   65756 cni.go:84] Creating CNI manager for ""
	I1124 08:49:09.827455   65756 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 08:49:09.827513   65756 start.go:353] cluster config:
	{Name:functional-504554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-504554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:49:09.831435   65756 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 24 08:49:44 functional-504554 crio[4890]: time="2025-11-24T08:49:44.352285953Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-504554" id=21e650a1-a714-40cb-adaa-9fad7eab6584 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:44 functional-504554 crio[4890]: time="2025-11-24T08:49:44.352455295Z" level=info msg="Image localhost/kicbase/echo-server:functional-504554 not found" id=21e650a1-a714-40cb-adaa-9fad7eab6584 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:44 functional-504554 crio[4890]: time="2025-11-24T08:49:44.352511698Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-504554 found" id=21e650a1-a714-40cb-adaa-9fad7eab6584 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.305892722Z" level=info msg="Checking image status: kicbase/echo-server:functional-504554" id=8b940e4a-38f5-49f2-9a4d-d04e922fe386 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.32990557Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-504554" id=eea6c636-e025-496e-ad77-56d4733eb695 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.330055069Z" level=info msg="Image docker.io/kicbase/echo-server:functional-504554 not found" id=eea6c636-e025-496e-ad77-56d4733eb695 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.330094557Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-504554 found" id=eea6c636-e025-496e-ad77-56d4733eb695 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.353189227Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-504554" id=066c3e90-405e-4f61-a514-fc2c0e641625 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.353307144Z" level=info msg="Image localhost/kicbase/echo-server:functional-504554 not found" id=066c3e90-405e-4f61-a514-fc2c0e641625 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:45 functional-504554 crio[4890]: time="2025-11-24T08:49:45.353394067Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-504554 found" id=066c3e90-405e-4f61-a514-fc2c0e641625 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.169271756Z" level=info msg="Checking image status: kicbase/echo-server:functional-504554" id=23899644-025a-44c7-af93-5e9617587721 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.193963741Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-504554" id=5c29ba63-2724-4353-9913-f8e54e0bf500 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.194094621Z" level=info msg="Image docker.io/kicbase/echo-server:functional-504554 not found" id=5c29ba63-2724-4353-9913-f8e54e0bf500 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.19412503Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-504554 found" id=5c29ba63-2724-4353-9913-f8e54e0bf500 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.218691506Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-504554" id=30fe52be-0cad-4c8e-bdb1-2c6741e2725e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.218810608Z" level=info msg="Image localhost/kicbase/echo-server:functional-504554 not found" id=30fe52be-0cad-4c8e-bdb1-2c6741e2725e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:46 functional-504554 crio[4890]: time="2025-11-24T08:49:46.218837925Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-504554 found" id=30fe52be-0cad-4c8e-bdb1-2c6741e2725e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 08:49:49 functional-504554 crio[4890]: time="2025-11-24T08:49:49.943031363Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6f78d71-ad2b-4400-a866-5e5afec56d37 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:49:52 functional-504554 crio[4890]: time="2025-11-24T08:49:52.94306541Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c2895c33-48a9-4316-911f-79c7a462480f name=/runtime.v1.ImageService/PullImage
	Nov 24 08:50:29 functional-504554 crio[4890]: time="2025-11-24T08:50:29.943947227Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f40c2f53-37f0-43cf-b56b-3e3e839bb5d0 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:50:44 functional-504554 crio[4890]: time="2025-11-24T08:50:44.94342555Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=723e2f6a-ebd2-4921-9455-2fb4311d498e name=/runtime.v1.ImageService/PullImage
	Nov 24 08:52:01 functional-504554 crio[4890]: time="2025-11-24T08:52:01.945058504Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7efc3864-64b9-4490-bef8-9e7d7b591544 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:52:11 functional-504554 crio[4890]: time="2025-11-24T08:52:11.943367814Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=584613e5-0d89-419a-9cd0-01aab60994a8 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:54:46 functional-504554 crio[4890]: time="2025-11-24T08:54:46.943859973Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2c35c3b8-94f4-47d3-9088-70f9f1189727 name=/runtime.v1.ImageService/PullImage
	Nov 24 08:55:02 functional-504554 crio[4890]: time="2025-11-24T08:55:02.943517054Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2fe5bf64-270f-4a85-82a0-f2d0b28f7656 name=/runtime.v1.ImageService/PullImage
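The "Neither image nor artifact ... found" entries above show CRI-O probing an unqualified reference against each candidate name in turn: the bare short name first, then docker.io, then localhost, before it falls back to pulling kicbase/echo-server:latest. A sketch of that candidate expansion follows; the registry order here is an assumption for illustration, since CRI-O actually takes it from registries.conf.

package main

import "fmt"

// candidates expands an unqualified image reference into the fully
// qualified names probed in the CRI-O log above. The registry list is
// assumed for illustration; the real search order comes from
// registries.conf on the node.
func candidates(ref string) []string {
	names := []string{ref} // the bare short name is checked first
	for _, reg := range []string{"docker.io", "localhost"} {
		names = append(names, reg+"/"+ref)
	}
	return names
}

func main() {
	for _, n := range candidates("kicbase/echo-server:functional-504554") {
		fmt.Println("Checking image status:", n)
	}
}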
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d3064853f4be2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   148b5335b999f       nginx-svc                                    default
	91f9c5d4516d9       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   caaedfe230b80       mysql-844cf969f6-n54ch                       default
	42e690987c311       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   9e9cab7c76f87       sp-pod                                       default
	66419bee44240       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   f488d3849b273       busybox-mount                                default
	39a7671ea92fc       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   9809636e8ad17       dashboard-metrics-scraper-5565989548-wwr46   kubernetes-dashboard
	151bc51a68c5a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   1f621ce70ad01       kubernetes-dashboard-b84665fb8-7f5sq         kubernetes-dashboard
	e46698de68e08       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Running             kube-apiserver              0                   f578ff0e79fc3       kube-apiserver-functional-504554             kube-system
	74f46efb1cbdb       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 10 minutes ago      Running             kube-scheduler              1                   536acdd4c6754       kube-scheduler-functional-504554             kube-system
	285a65bec33aa       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   437774bdd01ec       etcd-functional-504554                       kube-system
	432424c41264e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Running             kube-controller-manager     1                   ff74eefb3c2c3       kube-controller-manager-functional-504554    kube-system
	9fa919129d9f5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 10 minutes ago      Running             kube-proxy                  1                   ded85f2b19a23       kube-proxy-k7b82                             kube-system
	796047b64449b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   a1b80b1c34c43       kindnet-mbbhl                                kube-system
	7ff730ba66895       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   f30c8a17b139e       storage-provisioner                          kube-system
	a9026338447dc       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 10 minutes ago      Running             coredns                     1                   990839af33f67       coredns-7d764666f9-hq66w                     kube-system
	8684b4a0c9fc7       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Exited              coredns                     0                   990839af33f67       coredns-7d764666f9-hq66w                     kube-system
	5eb9a8d2e314e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   f30c8a17b139e       storage-provisioner                          kube-system
	c95dc92824ccd       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11               11 minutes ago      Exited              kindnet-cni                 0                   a1b80b1c34c43       kindnet-mbbhl                                kube-system
	8404dc8f8ac85       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Exited              kube-proxy                  0                   ded85f2b19a23       kube-proxy-k7b82                             kube-system
	3e84c74f722b8       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 11 minutes ago      Exited              kube-scheduler              0                   536acdd4c6754       kube-scheduler-functional-504554             kube-system
	30c6548fff11c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Exited              etcd                        0                   437774bdd01ec       etcd-functional-504554                       kube-system
	70a0a3a8770a2       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 11 minutes ago      Exited              kube-controller-manager     0                   ff74eefb3c2c3       kube-controller-manager-functional-504554    kube-system
	
	
	==> coredns [8684b4a0c9fc7d15e7974acff8c8f98cfd116c3d2d4c4b1cee6c8a7e56520b44] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46976 - 62009 "HINFO IN 4798014041049405544.6467560017421482140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017918871s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9026338447dcd210631b16d2a60dd76bd166e5039c8965cae03c6bfc1387d9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58620 - 49714 "HINFO IN 792595444974097974.5223531201337138457. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.016711837s
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               functional-504554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-504554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-504554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_47_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-504554
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:59:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:57:46 +0000   Mon, 24 Nov 2025 08:47:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:57:46 +0000   Mon, 24 Nov 2025 08:47:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:57:46 +0000   Mon, 24 Nov 2025 08:47:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:57:46 +0000   Mon, 24 Nov 2025 08:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-504554
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                096984b2-4829-418f-bb74-f16e24cc5bbc
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-x7ssn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-kkv2r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-n54ch                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m43s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 coredns-7d764666f9-hq66w                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-504554                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-mbbhl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-504554              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-504554     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k7b82                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-504554              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-wwr46    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-7f5sq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-504554 event: Registered Node functional-504554 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-504554 event: Registered Node functional-504554 in Controller
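(Arithmetic check on the percentages in the Allocated resources table above, not part of the kubectl output: allocatable CPU is 8 cores = 8000m, so the 1450m requested is 1450/8000 ≈ 18.1%, shown as 18%; the 732Mi of memory requested against 32863360Ki allocatable is 749568/32863360 ≈ 2.3%, shown as 2%.)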
	
	
	==> dmesg <==
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 08:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.027365] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023898] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.024840] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.022897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +4.031610] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +8.191119] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[ +16.382253] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
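(The repeated "martian source" entries above are the kernel flagging packets whose source address should be impossible on the receiving interface: 127.0.0.1 should never arrive over eth0. With loopback-published host ports and in-cluster NAT in this setup, these are most likely routing noise rather than a cause of the test failures.)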
	
	
	==> etcd [285a65bec33aa1de4ad9a2b66bdb3771daadeeb8b014c67e353a9d1aeb2b2681] <==
	{"level":"warn","ts":"2025-11-24T08:48:45.035602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.042540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.048823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.055370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.061701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.068262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.078499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.085453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.092148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.098092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.105016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.111100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.117973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.124194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.130358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.136581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.142680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.155678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.161578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.167947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.174033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:48:45.223851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36766","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:58:44.756631Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-11-24T08:58:44.775103Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"18.142165ms","hash":142290983,"current-db-size-bytes":3444736,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-24T08:58:44.775143Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":142290983,"revision":1135,"compact-revision":-1}
	
	
	==> etcd [30c6548fff11c9b4c3e4874a606375a5107152e9f5c35fc7c4a52cc4af0ae066] <==
	{"level":"info","ts":"2025-11-24T08:47:45.506580Z","caller":"traceutil/trace.go:172","msg":"trace[6635368] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"208.805922ms","start":"2025-11-24T08:47:45.297763Z","end":"2025-11-24T08:47:45.506569Z","steps":["trace[6635368] 'process raft request'  (duration: 208.630727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:47:45.506631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.978081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-y5akpziyisu4rnm7d6jczkcq6i\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T08:47:45.506667Z","caller":"traceutil/trace.go:172","msg":"trace[1738501330] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-y5akpziyisu4rnm7d6jczkcq6i; range_end:; response_count:0; response_revision:20; }","duration":"113.011487ms","start":"2025-11-24T08:47:45.393639Z","end":"2025-11-24T08:47:45.506650Z","steps":["trace[1738501330] 'agreement among raft nodes before linearized reading'  (duration: 112.955586ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:47:45.506721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.255073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T08:47:45.506749Z","caller":"traceutil/trace.go:172","msg":"trace[1787773309] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:20; }","duration":"119.288252ms","start":"2025-11-24T08:47:45.387454Z","end":"2025-11-24T08:47:45.506743Z","steps":["trace[1787773309] 'agreement among raft nodes before linearized reading'  (duration: 119.227877ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:47:45.506800Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.185962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-11-24T08:47:45.506834Z","caller":"traceutil/trace.go:172","msg":"trace[1576497882] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:20; }","duration":"147.215745ms","start":"2025-11-24T08:47:45.359605Z","end":"2025-11-24T08:47:45.506821Z","steps":["trace[1576497882] 'agreement among raft nodes before linearized reading'  (duration: 147.125782ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:48:42.710111Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T08:48:42.710185Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-504554","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T08:48:42.710299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:48:42.711751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:48:42.711809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:48:42.711826Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T08:48:42.711853Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T08:48:42.711872Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T08:48:42.711911Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:48:42.711903Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:48:42.711934Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:48:42.711941Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:48:42.711953Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-24T08:48:42.711941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:48:42.713590Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T08:48:42.713658Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:48:42.713687Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T08:48:42.713696Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-504554","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:59:11 up 41 min,  0 user,  load average: 0.19, 0.17, 0.22
	Linux functional-504554 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [796047b64449b682c5c1b3ef9f36b4748b214e7996b881754ed2028bf90a28cd] <==
	I1124 08:57:03.308537       1 main.go:301] handling current node
	I1124 08:57:13.311601       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:57:13.311631       1 main.go:301] handling current node
	I1124 08:57:23.308316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:57:23.308370       1 main.go:301] handling current node
	I1124 08:57:33.313465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:57:33.313506       1 main.go:301] handling current node
	I1124 08:57:43.309053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:57:43.309089       1 main.go:301] handling current node
	I1124 08:57:53.308082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:57:53.308142       1 main.go:301] handling current node
	I1124 08:58:03.315265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:03.315299       1 main.go:301] handling current node
	I1124 08:58:13.311113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:13.311145       1 main.go:301] handling current node
	I1124 08:58:23.307894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:23.307927       1 main.go:301] handling current node
	I1124 08:58:33.315284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:33.315323       1 main.go:301] handling current node
	I1124 08:58:43.311893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:43.311922       1 main.go:301] handling current node
	I1124 08:58:53.308130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:58:53.308164       1 main.go:301] handling current node
	I1124 08:59:03.314432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:59:03.314465       1 main.go:301] handling current node
	
	
	==> kindnet [c95dc92824ccd2976edc2c819cc96082645ba7115da28e12eab3a8af9da6a00d] <==
	I1124 08:47:54.831145       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 08:47:54.831446       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 08:47:54.831588       1 main.go:148] setting mtu 1500 for CNI 
	I1124 08:47:54.831604       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 08:47:54.831626       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T08:47:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 08:47:55.034451       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 08:47:55.034552       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 08:47:55.034566       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 08:47:55.035526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 08:47:55.429322       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 08:47:55.429373       1 metrics.go:72] Registering metrics
	I1124 08:47:55.429440       1 controller.go:711] "Syncing nftables rules"
	I1124 08:48:05.035772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:48:05.035858       1 main.go:301] handling current node
	I1124 08:48:15.040380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:48:15.040418       1 main.go:301] handling current node
	I1124 08:48:25.040447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 08:48:25.040475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e46698de68e08ca878ffdf91c208cc54d2c6eb4d3f2659a38297338879b2fa5f] <==
	I1124 08:48:45.702157       1 policy_source.go:248] refreshing policies
	I1124 08:48:45.707716       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 08:48:46.001108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 08:48:46.567482       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1124 08:48:46.774265       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 08:48:46.775470       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 08:48:46.780523       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 08:48:47.295202       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 08:48:47.382973       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 08:48:47.427265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 08:48:47.432461       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 08:48:49.236276       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 08:49:04.720643       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.8.132"}
	I1124 08:49:09.278713       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.50.50"}
	I1124 08:49:10.631389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.39.61"}
	I1124 08:49:10.729892       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 08:49:10.832961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.56.143"}
	I1124 08:49:10.851239       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.250.226"}
	E1124 08:49:25.538909       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39650: use of closed network connection
	I1124 08:49:28.113699       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.247.215"}
	I1124 08:49:36.184718       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.129.255"}
	E1124 08:49:39.289143       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32886: use of closed network connection
	E1124 08:49:40.449190       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32910: use of closed network connection
	E1124 08:49:42.508273       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32934: use of closed network connection
	I1124 08:58:45.604260       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [432424c41264ee52a921faf9f74ebc9f7057db87833abb7691f28654ad8e7d34] <==
	I1124 08:48:48.792038       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792064       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.791676       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792240       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 08:48:48.792382       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792064       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792488       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792528       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-504554"
	I1124 08:48:48.792598       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792650       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792611       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792635       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.792589       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1124 08:48:48.793405       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:48:48.797762       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.892700       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:48.892720       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 08:48:48.892726       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 08:48:48.893826       1 shared_informer.go:377] "Caches are synced"
	E1124 08:49:10.773559       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:49:10.777173       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:49:10.778735       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:49:10.781735       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:49:10.785995       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:49:10.791638       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [70a0a3a8770a28c187d8ecd93c2c2c912dec72a44bbb5a9dbe1686c99b806c55] <==
	I1124 08:47:51.939908       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-504554"
	I1124 08:47:51.939960       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 08:47:51.939999       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.940226       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.941652       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.941792       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.942184       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.942223       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.942273       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.942196       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.943471       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.943611       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.943618       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.943657       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.943729       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.944036       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.944644       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.946649       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:51.949036       1 range_allocator.go:433] "Set node PodCIDR" node="functional-504554" podCIDRs=["10.244.0.0/24"]
	I1124 08:47:51.953364       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:47:52.041524       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:52.041542       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 08:47:52.041547       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 08:47:52.054551       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:06.942144       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [8404dc8f8ac8552dc2f74e67fdfe515c5a7a8b2f3ae6dbc843794e988f7bfea5] <==
	I1124 08:47:53.547638       1 server_linux.go:53] "Using iptables proxy"
	I1124 08:47:53.618087       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:47:53.718767       1 shared_informer.go:377] "Caches are synced"
	I1124 08:47:53.718815       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:47:53.718973       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:47:53.747305       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:47:53.747426       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:47:53.754651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:47:53.755085       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:47:53.755120       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:47:53.757288       1 config.go:200] "Starting service config controller"
	I1124 08:47:53.757309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:47:53.757672       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:47:53.757977       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:47:53.758004       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:47:53.757994       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:47:53.758915       1 config.go:309] "Starting node config controller"
	I1124 08:47:53.758934       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:47:53.857446       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:47:53.858754       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 08:47:53.858864       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:47:53.859134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [9fa919129d9f56fe2ba078e01e7f6c652022070986d2946215c535ccd13840dc] <==
	I1124 08:48:32.978770       1 server_linux.go:53] "Using iptables proxy"
	I1124 08:48:33.046118       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:48:51.446684       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:51.446726       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 08:48:51.446823       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:48:51.465502       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 08:48:51.465562       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:48:51.470938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:48:51.471291       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:48:51.471313       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:48:51.472657       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:48:51.472667       1 config.go:200] "Starting service config controller"
	I1124 08:48:51.472684       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:48:51.472701       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:48:51.472704       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:48:51.472727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:48:51.472754       1 config.go:309] "Starting node config controller"
	I1124 08:48:51.472762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:48:51.573230       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:48:51.573258       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 08:48:51.573278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:48:51.573284       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3e84c74f722b87ad0d0e937ab0279976f9439030b3c031eca2579056b27afd4d] <==
	E1124 08:47:46.005657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 08:47:46.006657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1124 08:47:46.028039       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 08:47:46.029056       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 08:47:46.033103       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1124 08:47:46.033950       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1124 08:47:46.075728       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1124 08:47:46.076760       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1124 08:47:46.179796       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1124 08:47:46.180786       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:47:46.213379       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1124 08:47:46.214230       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 08:47:46.226453       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1124 08:47:46.227739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 08:47:46.235676       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1124 08:47:46.236495       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 08:47:46.348055       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1124 08:47:46.349023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1124 08:47:48.851899       1 shared_informer.go:377] "Caches are synced"
	I1124 08:48:42.601511       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 08:48:42.601555       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:48:42.601630       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 08:48:42.601651       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 08:48:42.601674       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 08:48:42.601696       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [74f46efb1cbdb6bd40557032d38b78b5445c13f264031e64e818f2e1e8cee3b5] <==
	I1124 08:48:44.549881       1 serving.go:386] Generated self-signed cert in-memory
	W1124 08:48:45.599830       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 08:48:45.599864       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 08:48:45.599876       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 08:48:45.599885       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 08:48:45.616997       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1124 08:48:45.617020       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:48:45.618859       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:48:45.618888       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:48:45.619019       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 08:48:45.619061       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 08:48:45.719439       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 08:57:28 functional-504554 kubelet[5404]: E1124 08:57:28.943251    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:57:36 functional-504554 kubelet[5404]: E1124 08:57:36.942026    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-504554" containerName="kube-controller-manager"
	Nov 24 08:57:37 functional-504554 kubelet[5404]: E1124 08:57:37.943621    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:57:43 functional-504554 kubelet[5404]: E1124 08:57:43.943281    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:57:46 functional-504554 kubelet[5404]: E1124 08:57:46.942314    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-504554" containerName="kube-scheduler"
	Nov 24 08:57:52 functional-504554 kubelet[5404]: E1124 08:57:52.943288    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:57:54 functional-504554 kubelet[5404]: E1124 08:57:54.943041    5404 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-7f5sq" containerName="kubernetes-dashboard"
	Nov 24 08:57:56 functional-504554 kubelet[5404]: E1124 08:57:56.943382    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:57:57 functional-504554 kubelet[5404]: E1124 08:57:57.943198    5404 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hq66w" containerName="coredns"
	Nov 24 08:58:03 functional-504554 kubelet[5404]: E1124 08:58:03.942842    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-504554" containerName="kube-apiserver"
	Nov 24 08:58:07 functional-504554 kubelet[5404]: E1124 08:58:07.942642    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:58:11 functional-504554 kubelet[5404]: E1124 08:58:11.943159    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:58:13 functional-504554 kubelet[5404]: E1124 08:58:13.944138    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-504554" containerName="etcd"
	Nov 24 08:58:19 functional-504554 kubelet[5404]: E1124 08:58:19.943104    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:58:22 functional-504554 kubelet[5404]: E1124 08:58:22.942891    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:58:28 functional-504554 kubelet[5404]: E1124 08:58:28.942655    5404 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-wwr46" containerName="dashboard-metrics-scraper"
	Nov 24 08:58:32 functional-504554 kubelet[5404]: E1124 08:58:32.942792    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:58:36 functional-504554 kubelet[5404]: E1124 08:58:36.943180    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:58:46 functional-504554 kubelet[5404]: E1124 08:58:46.942978    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:58:51 functional-504554 kubelet[5404]: E1124 08:58:51.943110    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:58:57 functional-504554 kubelet[5404]: E1124 08:58:57.942837    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-504554" containerName="kube-scheduler"
	Nov 24 08:59:01 functional-504554 kubelet[5404]: E1124 08:59:01.944544    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-x7ssn" podUID="820531a2-979d-4986-9dea-a427316a8bdc"
	Nov 24 08:59:05 functional-504554 kubelet[5404]: E1124 08:59:05.942705    5404 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-kkv2r" podUID="d7bddf02-867f-4dc1-8b44-b32edbb40a7d"
	Nov 24 08:59:06 functional-504554 kubelet[5404]: E1124 08:59:06.942794    5404 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-504554" containerName="kube-controller-manager"
	Nov 24 08:59:07 functional-504554 kubelet[5404]: E1124 08:59:07.942847    5404 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-7f5sq" containerName="kubernetes-dashboard"
	
	
	==> kubernetes-dashboard [151bc51a68c5a871d7db8df09979b5f4f797d5c8c37b8ec436be67da7a75c11e] <==
	2025/11/24 08:49:13 Starting overwatch
	2025/11/24 08:49:13 Using namespace: kubernetes-dashboard
	2025/11/24 08:49:13 Using in-cluster config to connect to apiserver
	2025/11/24 08:49:13 Using secret token for csrf signing
	2025/11/24 08:49:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 08:49:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 08:49:13 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/11/24 08:49:13 Generating JWE encryption key
	2025/11/24 08:49:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 08:49:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 08:49:13 Initializing JWE encryption key from synchronized object
	2025/11/24 08:49:13 Creating in-cluster Sidecar client
	2025/11/24 08:49:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:49:13 Serving insecurely on HTTP port: 9090
	2025/11/24 08:49:43 Successful request to sidecar
	
	
	==> storage-provisioner [5eb9a8d2e314ee6747093821f2da0ff5b350d9c2c7d065339896870a531f61ad] <==
	I1124 08:48:06.117001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-504554_85224c22-c051-4e9b-baab-03c01a54125e!
	W1124 08:48:08.024522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:08.029423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:10.032531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:10.036500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:12.039537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:12.044420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:14.047321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:14.051500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:16.055152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:16.060032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:18.063188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:18.067439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:20.070452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:20.075107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:22.077986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:22.082189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:24.085520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:24.089902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:26.093579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:26.097420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:28.100273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:28.105148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:30.107835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:48:30.111381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7ff730ba668954d9bfeffbe6a19cc8d853221ff273067d3efbc2ddb75aa8f185] <==
	W1124 08:58:45.571265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:47.574232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:47.578216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:49.581176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:49.584874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:51.587796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:51.591219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:53.594415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:53.599521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:55.602261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:55.606191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:57.609182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:57.614136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:59.617040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:58:59.620757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:01.624436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:01.628747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:03.632545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:03.637721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:05.640602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:05.644096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:07.648805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:07.654306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:09.657389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:59:09.661286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
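The kubelet log above repeats the same pull failure for every echo-server pod: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That policy lives in the node's containers-registries.conf(5), not in the test itself; one way to confirm it on this profile (a sketch, assuming the stock config path inside the minikube node):

	out/minikube-linux-amd64 -p functional-504554 ssh -- cat /etc/containers/registries.conf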
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-504554 -n functional-504554
helpers_test.go:269: (dbg) Run:  kubectl --context functional-504554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-x7ssn hello-node-connect-9f67c86d4-kkv2r
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-504554 describe pod busybox-mount hello-node-5758569b79-x7ssn hello-node-connect-9f67c86d4-kkv2r
helpers_test.go:290: (dbg) kubectl --context functional-504554 describe pod busybox-mount hello-node-5758569b79-x7ssn hello-node-connect-9f67c86d4-kkv2r:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-504554/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:49:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://66419bee44240e099943a46dde7500ebe1bdd9e8d9233eaa0f54a67a0451afba
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 08:49:19 +0000
	      Finished:     Mon, 24 Nov 2025 08:49:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kwlk5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kwlk5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m54s  default-scheduler  Successfully assigned default/busybox-mount to functional-504554
	  Normal  Pulling    9m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.134s (1.162s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m53s  kubelet            Container created
	  Normal  Started    9m53s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-x7ssn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-504554/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:49:10 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c877h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c877h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-x7ssn to functional-504554
	  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-kkv2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-504554/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 08:49:09 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h54sc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h54sc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-kkv2r to functional-504554
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.92s)
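Every hello-node failure in this post-mortem reduces to CRI-O rejecting the unqualified image name kicbase/echo-server. A minimal sketch of the registries.conf settings that produce exactly this behavior (the registry list here is illustrative, not read from this run):

	# containers-registries.conf(5) format
	# "enforcing" fails non-interactive pulls of short names that resolve
	# ambiguously across the unqualified-search-registries list, instead
	# of silently picking one registry.
	short-name-mode = "enforcing"
	unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]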

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-504554 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-504554 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-x7ssn" [820531a2-979d-4986-9dea-a427316a8bdc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/11/24 08:49:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-504554 -n functional-504554
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 08:59:10.968532639 +0000 UTC m=+1837.029256564
functional_test.go:1460: (dbg) Run:  kubectl --context functional-504554 describe po hello-node-5758569b79-x7ssn -n default
functional_test.go:1460: (dbg) kubectl --context functional-504554 describe po hello-node-5758569b79-x7ssn -n default:
Name:             hello-node-5758569b79-x7ssn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-504554/192.168.49.2
Start Time:       Mon, 24 Nov 2025 08:49:10 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c877h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-c877h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-x7ssn to functional-504554
  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-504554 logs hello-node-5758569b79-x7ssn -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-504554 logs hello-node-5758569b79-x7ssn -n default: exit status 1 (66.489895ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-x7ssn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-504554 logs hello-node-5758569b79-x7ssn -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.61s)
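This is the same short-name rejection, hit at deployment creation time rather than at pull retry. A sketch of the qualified-name variant of the commands the test runs (only the image reference changes):

	kubectl --context functional-504554 create deployment hello-node \
	    --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-504554 expose deployment hello-node \
	    --type=NodePort --port=8080
	kubectl --context functional-504554 get pods -l app=hello-node -w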

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-504554" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.87s)
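image load --daemon streams the tagged image out of the host Docker daemon into the cluster's runtime, and the assertion is simply that it then shows up in image ls. A manual round trip to reproduce the check, assuming the tag still exists in the host daemon:

	docker image inspect kicbase/echo-server:functional-504554 --format '{{.Id}}'
	out/minikube-linux-amd64 -p functional-504554 image load --daemon \
	    kicbase/echo-server:functional-504554
	out/minikube-linux-amd64 -p functional-504554 image ls | grep echo-server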

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-504554" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-504554
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image load --daemon kicbase/echo-server:functional-504554 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-504554" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image save kicbase/echo-server:functional-504554 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)
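Here image save returned without the target archive existing, so the CLI failed silently. A defensive wrapper that makes the missing-file case loud, using the same path as the test:

	TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-504554 image save \
	    kicbase/echo-server:functional-504554 "$TAR" --alsologtostderr
	test -s "$TAR" || echo "image save exited cleanly but wrote no archive"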

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 08:49:46.520226   73734 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:49:46.520320   73734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:46.520328   73734 out.go:374] Setting ErrFile to fd 2...
	I1124 08:49:46.520347   73734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:46.520549   73734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:49:46.521112   73734 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:49:46.521206   73734 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:49:46.521589   73734 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
	I1124 08:49:46.540720   73734 ssh_runner.go:195] Run: systemctl --version
	I1124 08:49:46.540774   73734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
	I1124 08:49:46.558157   73734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
	I1124 08:49:46.658073   73734 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1124 08:49:46.658140   73734 cache_images.go:255] Failed to load cached images for "functional-504554": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1124 08:49:46.658170   73734 cache_images.go:267] failed pushing to: functional-504554

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.20s)
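The stat error in the stderr above is a cascade from the previous failure: the tar was never written, so the load has nothing to push. Guarding the load on the file's existence keeps the two failures distinct, a sketch:

	TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	if [ -s "$TAR" ]; then
	    out/minikube-linux-amd64 -p functional-504554 image load "$TAR" --alsologtostderr
	else
	    echo "skipping load: $TAR missing (upstream image save failed)"
	fi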

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-504554
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image save --daemon kicbase/echo-server:functional-504554 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-504554
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-504554: exit status 1 (16.761751ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-504554

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-504554

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.34s)
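image save --daemon is expected to push the image back into the host Docker daemon, and the test probes the localhost/-prefixed name under which such images typically reappear from a CRI-O cluster. A quick check of every spelling after a save, a sketch:

	out/minikube-linux-amd64 -p functional-504554 image save --daemon \
	    kicbase/echo-server:functional-504554 --alsologtostderr
	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server \
	    || echo "image absent from the host daemon under any name"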

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 service --namespace=default --https --url hello-node: exit status 115 (531.467082ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30123
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-504554 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)
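SVC_UNREACHABLE means the NodePort exists but no ready pod backs the service, which is consistent with the ImagePullBackOff failures above. The same readiness check can be run by hand; a sketch:

	kubectl --context functional-504554 get endpoints hello-node -o wide
	kubectl --context functional-504554 get pods -l app=hello-node \
	    -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready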

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 service hello-node --url --format={{.IP}}: exit status 115 (532.275937ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-504554 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 service hello-node --url: exit status 115 (532.70317ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30123
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-504554 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30123
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)
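Since the URL is printed before the command bails out, the endpoint can also be probed directly; with no backing pod the NodePort should refuse or time out. A sketch, reusing the URL from the stdout above:

	curl -sv -m 5 http://192.168.49.2:30123/ || echo "no backend behind the NodePort"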

                                                
                                    
TestJSONOutput/pause/Command (2.34s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-142305 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-142305 --output=json --user=testUser: exit status 80 (2.33642169s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dba2bac3-ccbf-4df0-9a07-48adb8bfa25a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-142305 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7c12d230-dee5-4fbc-ac28-187903bfd83a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T09:08:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"192e29e5-e1f1-44ad-9f5f-a2c1f8ed4394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-142305 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.34s)
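The --output=json stream is one CloudEvent per line, so the GUEST_PAUSE error can be extracted mechanically rather than eyeballed. A sketch, assuming jq is available on the host:

	out/minikube-linux-amd64 pause -p json-output-142305 --output=json --user=testUser \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'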

                                                
                                    
TestJSONOutput/unpause/Command (1.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-142305 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-142305 --output=json --user=testUser: exit status 80 (1.59970487s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"907476f3-0529-470e-a761-d1d45945066f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-142305 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"07f3420d-659b-43b9-84c1-dea1da73dab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T09:08:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"35f12592-f592-4ea4-be36-5736b699b5c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-142305 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.60s)

                                                
                                    
TestPause/serial/Pause (5.6s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-374067 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-374067 --alsologtostderr -v=5: exit status 80 (1.935466587s)

                                                
                                                
-- stdout --
	* Pausing node pause-374067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:22:43.286123  221164 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:22:43.286439  221164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:43.286448  221164 out.go:374] Setting ErrFile to fd 2...
	I1124 09:22:43.286455  221164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:43.286732  221164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:22:43.287021  221164 out.go:368] Setting JSON to false
	I1124 09:22:43.287038  221164 mustload.go:66] Loading cluster: pause-374067
	I1124 09:22:43.287589  221164 config.go:182] Loaded profile config "pause-374067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:22:43.288126  221164 cli_runner.go:164] Run: docker container inspect pause-374067 --format={{.State.Status}}
	I1124 09:22:43.323685  221164 host.go:66] Checking if "pause-374067" exists ...
	I1124 09:22:43.324096  221164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:22:43.438572  221164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 09:22:43.419357712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:22:43.439616  221164 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-374067 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:22:43.441274  221164 out.go:179] * Pausing node pause-374067 ... 
	I1124 09:22:43.443146  221164 host.go:66] Checking if "pause-374067" exists ...
	I1124 09:22:43.443725  221164 ssh_runner.go:195] Run: systemctl --version
	I1124 09:22:43.443901  221164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-374067
	I1124 09:22:43.473513  221164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/pause-374067/id_rsa Username:docker}
	I1124 09:22:43.588734  221164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:43.607595  221164 pause.go:52] kubelet running: true
	I1124 09:22:43.607725  221164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:22:43.796816  221164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:22:43.796945  221164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:22:43.889820  221164 cri.go:89] found id: "aa29ee9f85cebf74baa27cf3e80679170de62efb51dbc4a90fcccc2d42304736"
	I1124 09:22:43.889894  221164 cri.go:89] found id: "18568b03fc3a07fc46ebb0f16b57658c50f8fa5f1791cee2b802d57da67d4304"
	I1124 09:22:43.889914  221164 cri.go:89] found id: "563770d0bdcaca8df60133fb589e3f8410e97a0a80741909cf32147018fb90a7"
	I1124 09:22:43.889924  221164 cri.go:89] found id: "b02924f1926e4c0fc93eb2c175e5bfccd26af5cf344df926d0776e32bf3d38a6"
	I1124 09:22:43.889929  221164 cri.go:89] found id: "b769a8cbb7f808fcea81adf919a6a8b785de217bddb177e810fe547c4123bcfa"
	I1124 09:22:43.889934  221164 cri.go:89] found id: "8914a013c234229f7106c5d4ac049cca62019009b8a979764da4eaca6c996dd1"
	I1124 09:22:43.889957  221164 cri.go:89] found id: "52fc240f0457397d561acf64bb27037c43f32a53bfb4416c6ad15817de9b3d61"
	I1124 09:22:43.889963  221164 cri.go:89] found id: ""
	I1124 09:22:43.890017  221164 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:22:43.905244  221164 retry.go:31] will retry after 285.482077ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:43Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:22:44.191749  221164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:44.204209  221164 pause.go:52] kubelet running: false
	I1124 09:22:44.204268  221164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:22:44.324089  221164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:22:44.324169  221164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:22:44.400495  221164 cri.go:89] found id: "aa29ee9f85cebf74baa27cf3e80679170de62efb51dbc4a90fcccc2d42304736"
	I1124 09:22:44.400515  221164 cri.go:89] found id: "18568b03fc3a07fc46ebb0f16b57658c50f8fa5f1791cee2b802d57da67d4304"
	I1124 09:22:44.400519  221164 cri.go:89] found id: "563770d0bdcaca8df60133fb589e3f8410e97a0a80741909cf32147018fb90a7"
	I1124 09:22:44.400522  221164 cri.go:89] found id: "b02924f1926e4c0fc93eb2c175e5bfccd26af5cf344df926d0776e32bf3d38a6"
	I1124 09:22:44.400525  221164 cri.go:89] found id: "b769a8cbb7f808fcea81adf919a6a8b785de217bddb177e810fe547c4123bcfa"
	I1124 09:22:44.400528  221164 cri.go:89] found id: "8914a013c234229f7106c5d4ac049cca62019009b8a979764da4eaca6c996dd1"
	I1124 09:22:44.400531  221164 cri.go:89] found id: "52fc240f0457397d561acf64bb27037c43f32a53bfb4416c6ad15817de9b3d61"
	I1124 09:22:44.400534  221164 cri.go:89] found id: ""
	I1124 09:22:44.400575  221164 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:22:44.412767  221164 retry.go:31] will retry after 516.78646ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:44Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:22:44.930543  221164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:44.943701  221164 pause.go:52] kubelet running: false
	I1124 09:22:44.943755  221164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:22:45.052556  221164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:22:45.052640  221164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:22:45.114928  221164 cri.go:89] found id: "aa29ee9f85cebf74baa27cf3e80679170de62efb51dbc4a90fcccc2d42304736"
	I1124 09:22:45.114955  221164 cri.go:89] found id: "18568b03fc3a07fc46ebb0f16b57658c50f8fa5f1791cee2b802d57da67d4304"
	I1124 09:22:45.114963  221164 cri.go:89] found id: "563770d0bdcaca8df60133fb589e3f8410e97a0a80741909cf32147018fb90a7"
	I1124 09:22:45.114968  221164 cri.go:89] found id: "b02924f1926e4c0fc93eb2c175e5bfccd26af5cf344df926d0776e32bf3d38a6"
	I1124 09:22:45.114973  221164 cri.go:89] found id: "b769a8cbb7f808fcea81adf919a6a8b785de217bddb177e810fe547c4123bcfa"
	I1124 09:22:45.114978  221164 cri.go:89] found id: "8914a013c234229f7106c5d4ac049cca62019009b8a979764da4eaca6c996dd1"
	I1124 09:22:45.114982  221164 cri.go:89] found id: "52fc240f0457397d561acf64bb27037c43f32a53bfb4416c6ad15817de9b3d61"
	I1124 09:22:45.114987  221164 cri.go:89] found id: ""
	I1124 09:22:45.115029  221164 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:22:45.128308  221164 out.go:203] 
	W1124 09:22:45.129535  221164 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:22:45.129554  221164 out.go:285] * 
	* 
	W1124 09:22:45.133261  221164 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:22:45.134426  221164 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-374067 --alsologtostderr -v=5" : exit status 80
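All three retries die on the same error: pause enumerates running containers with sudo runc list -f json, which reads runc's default state directory /run/runc, and that directory does not exist on this CRI-O node because CRI-O keeps its runtime state elsewhere. A diagnostic sketch from inside the node; the /run/crio/runc path is illustrative only, so check what crio config actually reports:

	minikube -p pause-374067 ssh -- ls -ld /run/runc /run/crio
	# Where does CRI-O tell its OCI runtime to keep state?
	minikube -p pause-374067 ssh -- sudo crio config | grep -A5 'crio.runtime'
	# Retry the listing against that root (path below is a guess):
	minikube -p pause-374067 ssh -- sudo runc --root /run/crio/runc list -f json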
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-374067
helpers_test.go:243: (dbg) docker inspect pause-374067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834",
	        "Created": "2025-11-24T09:21:53.706624648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:21:53.823970065Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/hosts",
	        "LogPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834-json.log",
	        "Name": "/pause-374067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-374067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-374067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834",
	                "LowerDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-374067",
	                "Source": "/var/lib/docker/volumes/pause-374067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-374067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-374067",
	                "name.minikube.sigs.k8s.io": "pause-374067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "88d14c65728ce8eae49ed261117d140fb9387bffd2cc09af11aece1c5d0e0a2e",
	            "SandboxKey": "/var/run/docker/netns/88d14c65728c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-374067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "207fa621dc590e92b376cf58bb9028748fb46ca63b98e9831ad9de96dd8dddac",
	                    "EndpointID": "cc3a21a15e49c4940aa0327dace069cf31947101cdd52f6b2817b025bd91a628",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:9c:5a:5e:cb:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-374067",
	                        "fc012686e50b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-374067 -n pause-374067
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-374067 -n pause-374067: exit status 2 (312.792011ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-374067 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --cancel-scheduled                                                                                 │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │ 24 Nov 25 09:20 UTC │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │ 24 Nov 25 09:21 UTC │
	│ delete  │ -p scheduled-stop-310817                                                                                                    │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p insufficient-storage-761969 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-761969 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │                     │
	│ delete  │ -p insufficient-storage-761969                                                                                              │ insufficient-storage-761969 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p force-systemd-env-401542 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-401542    │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p pause-374067 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p offline-crio-330284 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-330284         │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-385309 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-385309      │ jenkins │ v1.32.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ delete  │ -p force-systemd-env-401542                                                                                                 │ force-systemd-env-401542    │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p force-systemd-flag-595035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-595035   │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ delete  │ -p offline-crio-330284                                                                                                      │ offline-crio-330284         │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p pause-374067 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p cert-expiration-362724 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-362724      │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ stop    │ stopped-upgrade-385309 stop                                                                                                 │ stopped-upgrade-385309      │ jenkins │ v1.32.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-385309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-385309      │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ pause   │ -p pause-374067 --alsologtostderr -v=5                                                                                      │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
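
==> editor's note: replaying the audited pause flow <==
The audit table above closes with a start/pause pair against the pause-374067 profile, the flow the pause-related tests in this report exercise. What follows is an added sketch, not harness output: a minimal Go program, assuming a built out/minikube-linux-amd64 in the working directory, that replays those two invocations; the helper name run is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary the way the harness does and
// echoes its combined stdout/stderr.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// The last two rows of the table: start the profile, then pause it.
	if err := run("start", "-p", "pause-374067", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio"); err != nil {
		fmt.Println("start failed:", err)
	}
	if err := run("pause", "-p", "pause-374067", "--alsologtostderr", "-v=5"); err != nil {
		fmt.Println("pause failed:", err)
	}
}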
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:22:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:22:42.976583  220881 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:22:42.976717  220881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:42.976728  220881 out.go:374] Setting ErrFile to fd 2...
	I1124 09:22:42.976735  220881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:42.977492  220881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:22:42.978226  220881 out.go:368] Setting JSON to false
	I1124 09:22:42.980364  220881 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3909,"bootTime":1763972254,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:22:42.980447  220881 start.go:143] virtualization: kvm guest
	I1124 09:22:42.982247  220881 out.go:179] * [stopped-upgrade-385309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:22:42.983844  220881 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:22:42.983878  220881 notify.go:221] Checking for updates...
	I1124 09:22:42.985856  220881 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:22:42.986898  220881 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:22:42.988061  220881 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:22:42.989255  220881 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:22:42.990395  220881 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:22:42.992147  220881 config.go:182] Loaded profile config "stopped-upgrade-385309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 09:22:42.994145  220881 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1124 09:22:42.995185  220881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:22:43.025455  220881 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:22:43.025638  220881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:22:43.131386  220881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 09:22:43.110713662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:22:43.131525  220881 docker.go:319] overlay module found
	I1124 09:22:43.134952  220881 out.go:179] * Using the docker driver based on existing profile
	I1124 09:22:41.197721  218120 addons.go:530] duration metric: took 110.38141ms for enable addons: enabled=[]
	I1124 09:22:41.197797  218120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:41.303955  218120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:41.316654  218120 node_ready.go:35] waiting up to 6m0s for node "pause-374067" to be "Ready" ...
	I1124 09:22:41.324884  218120 node_ready.go:49] node "pause-374067" is "Ready"
	I1124 09:22:41.324911  218120 node_ready.go:38] duration metric: took 8.227238ms for node "pause-374067" to be "Ready" ...
	I1124 09:22:41.324929  218120 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:41.324980  218120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:41.336240  218120 api_server.go:72] duration metric: took 248.993028ms to wait for apiserver process to appear ...
	I1124 09:22:41.336258  218120 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:41.336275  218120 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:22:41.341085  218120 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:22:41.341919  218120 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:41.341942  218120 api_server.go:131] duration metric: took 5.677576ms to wait for apiserver health ...
	I1124 09:22:41.341952  218120 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:41.345077  218120 system_pods.go:59] 7 kube-system pods found
	I1124 09:22:41.345102  218120 system_pods.go:61] "coredns-66bc5c9577-skkdp" [7363696f-37d3-4e88-9725-81b0d8856d3c] Running
	I1124 09:22:41.345109  218120 system_pods.go:61] "etcd-pause-374067" [746e1952-ef35-434d-933a-40d11253112d] Running
	I1124 09:22:41.345116  218120 system_pods.go:61] "kindnet-4kv5p" [a4ea612d-af5f-4f10-96ec-4bbbd27f5176] Running
	I1124 09:22:41.345121  218120 system_pods.go:61] "kube-apiserver-pause-374067" [5ed58586-c875-47ef-8b98-b18ff6f6f6f7] Running
	I1124 09:22:41.345128  218120 system_pods.go:61] "kube-controller-manager-pause-374067" [2aff41ea-eb6f-45e0-9651-d6222f13147d] Running
	I1124 09:22:41.345135  218120 system_pods.go:61] "kube-proxy-4fcdr" [4b13baad-7b4e-4b8b-bebe-1464390054d7] Running
	I1124 09:22:41.345140  218120 system_pods.go:61] "kube-scheduler-pause-374067" [5a19238f-cb6f-46bf-8e66-08fc1f458d59] Running
	I1124 09:22:41.345151  218120 system_pods.go:74] duration metric: took 3.192801ms to wait for pod list to return data ...
	I1124 09:22:41.345159  218120 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:22:41.346805  218120 default_sa.go:45] found service account: "default"
	I1124 09:22:41.346824  218120 default_sa.go:55] duration metric: took 1.659179ms for default service account to be created ...
	I1124 09:22:41.346831  218120 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:22:41.348976  218120 system_pods.go:86] 7 kube-system pods found
	I1124 09:22:41.348997  218120 system_pods.go:89] "coredns-66bc5c9577-skkdp" [7363696f-37d3-4e88-9725-81b0d8856d3c] Running
	I1124 09:22:41.349002  218120 system_pods.go:89] "etcd-pause-374067" [746e1952-ef35-434d-933a-40d11253112d] Running
	I1124 09:22:41.349006  218120 system_pods.go:89] "kindnet-4kv5p" [a4ea612d-af5f-4f10-96ec-4bbbd27f5176] Running
	I1124 09:22:41.349009  218120 system_pods.go:89] "kube-apiserver-pause-374067" [5ed58586-c875-47ef-8b98-b18ff6f6f6f7] Running
	I1124 09:22:41.349013  218120 system_pods.go:89] "kube-controller-manager-pause-374067" [2aff41ea-eb6f-45e0-9651-d6222f13147d] Running
	I1124 09:22:41.349017  218120 system_pods.go:89] "kube-proxy-4fcdr" [4b13baad-7b4e-4b8b-bebe-1464390054d7] Running
	I1124 09:22:41.349022  218120 system_pods.go:89] "kube-scheduler-pause-374067" [5a19238f-cb6f-46bf-8e66-08fc1f458d59] Running
	I1124 09:22:41.349029  218120 system_pods.go:126] duration metric: took 2.193629ms to wait for k8s-apps to be running ...
	I1124 09:22:41.349037  218120 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:22:41.349078  218120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:41.361476  218120 system_svc.go:56] duration metric: took 12.43119ms WaitForService to wait for kubelet
	I1124 09:22:41.361498  218120 kubeadm.go:587] duration metric: took 274.25544ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:22:41.361512  218120 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:41.363839  218120 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:22:41.363861  218120 node_conditions.go:123] node cpu capacity is 8
	I1124 09:22:41.363876  218120 node_conditions.go:105] duration metric: took 2.359452ms to run NodePressure ...
	I1124 09:22:41.363891  218120 start.go:242] waiting for startup goroutines ...
	I1124 09:22:41.363906  218120 start.go:247] waiting for cluster config update ...
	I1124 09:22:41.363920  218120 start.go:256] writing updated cluster config ...
	I1124 09:22:41.448515  218120 ssh_runner.go:195] Run: rm -f paused
	I1124 09:22:41.452831  218120 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:41.453251  218120 kapi.go:59] client config for pause-374067: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/profiles/pause-374067/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/profiles/pause-374067/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:22:41.456062  218120 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-skkdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.460709  218120 pod_ready.go:94] pod "coredns-66bc5c9577-skkdp" is "Ready"
	I1124 09:22:41.460727  218120 pod_ready.go:86] duration metric: took 4.642254ms for pod "coredns-66bc5c9577-skkdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.462890  218120 pod_ready.go:83] waiting for pod "etcd-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.468503  218120 pod_ready.go:94] pod "etcd-pause-374067" is "Ready"
	I1124 09:22:41.468528  218120 pod_ready.go:86] duration metric: took 5.619458ms for pod "etcd-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.470178  218120 pod_ready.go:83] waiting for pod "kube-apiserver-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.473822  218120 pod_ready.go:94] pod "kube-apiserver-pause-374067" is "Ready"
	I1124 09:22:41.473845  218120 pod_ready.go:86] duration metric: took 3.650045ms for pod "kube-apiserver-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.475522  218120 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.856182  218120 pod_ready.go:94] pod "kube-controller-manager-pause-374067" is "Ready"
	I1124 09:22:41.856216  218120 pod_ready.go:86] duration metric: took 380.674306ms for pod "kube-controller-manager-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.056305  218120 pod_ready.go:83] waiting for pod "kube-proxy-4fcdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.458240  218120 pod_ready.go:94] pod "kube-proxy-4fcdr" is "Ready"
	I1124 09:22:42.458266  218120 pod_ready.go:86] duration metric: took 401.93848ms for pod "kube-proxy-4fcdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.657956  218120 pod_ready.go:83] waiting for pod "kube-scheduler-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.059276  218120 pod_ready.go:94] pod "kube-scheduler-pause-374067" is "Ready"
	I1124 09:22:43.059300  218120 pod_ready.go:86] duration metric: took 401.319414ms for pod "kube-scheduler-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.059313  218120 pod_ready.go:40] duration metric: took 1.606450666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:43.143251  218120 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:22:43.144964  218120 out.go:179] * Done! kubectl is now configured to use "pause-374067" cluster and "default" namespace by default
	I1124 09:22:43.136478  220881 start.go:309] selected driver: docker
	I1124 09:22:43.136494  220881 start.go:927] validating driver "docker" against &{Name:stopped-upgrade-385309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-385309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:43.136597  220881 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:22:43.137435  220881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:22:43.241892  220881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:22:43.227598154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:22:43.242478  220881 cni.go:84] Creating CNI manager for ""
	I1124 09:22:43.242627  220881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:43.242734  220881 start.go:353] cluster config:
	{Name:stopped-upgrade-385309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-385309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:43.245174  220881 out.go:179] * Starting "stopped-upgrade-385309" primary control-plane node in "stopped-upgrade-385309" cluster
	I1124 09:22:43.246236  220881 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:22:43.248307  220881 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:22:43.254082  220881 preload.go:188] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1124 09:22:43.254245  220881 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1124 09:22:43.254792  220881 cache.go:65] Caching tarball of preloaded images
	I1124 09:22:43.254918  220881 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:22:43.254972  220881 cache.go:68] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1124 09:22:43.255128  220881 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/stopped-upgrade-385309/config.json ...
	I1124 09:22:43.254164  220881 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1124 09:22:43.279047  220881 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1124 09:22:43.279277  220881 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1124 09:22:43.279295  220881 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1124 09:22:43.279302  220881 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1124 09:22:43.279327  220881 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1124 09:22:43.279378  220881 cache.go:176] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1124 09:22:43.378170  220881 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1124 09:22:43.378221  220881 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:22:43.378267  220881 start.go:360] acquireMachinesLock for stopped-upgrade-385309: {Name:mk9d8748ac1e371ea38d4df7aaf13ce72c77f655 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:22:43.378390  220881 start.go:364] duration metric: took 96.076µs to acquireMachinesLock for "stopped-upgrade-385309"
	I1124 09:22:43.378419  220881 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:22:43.378426  220881 fix.go:54] fixHost starting: 
	I1124 09:22:43.378785  220881 cli_runner.go:164] Run: docker container inspect stopped-upgrade-385309 --format={{.State.Status}}
	I1124 09:22:43.407956  220881 fix.go:112] recreateIfNeeded on stopped-upgrade-385309: state=Stopped err=<nil>
	W1124 09:22:43.407990  220881 fix.go:138] unexpected machine state, will restart: <nil>
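
==> editor's note: the healthz wait in Go <==
The "Last Start" trace above shows the readiness gates minikube applies after restarting the kubelet: a pgrep for the kube-apiserver process, a GET against /healthz that returned 200 "ok", then per-pod Ready checks. This is an added sketch of that polling step, assuming the endpoint from the log; certificate verification is skipped here for brevity, whereas minikube itself pins the cluster CA (see the CAFile in the client config above).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it answers
// 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Simplification for the sketch: skip TLS verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the trace above.
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}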
	
	
	==> CRI-O <==
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.573409994Z" level=info msg="RDT not available in the host system"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57342628Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57424771Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57426545Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.574277757Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57503427Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.575051441Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.579274449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.579299655Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580001524Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580388658Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580462511Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.672926776Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-skkdp Namespace:kube-system ID:7a2da4d54c80d2a2351e9c974d3cf144eca92f243c68e57682e6d8bd13d03c69 UID:7363696f-37d3-4e88-9725-81b0d8856d3c NetNS:/var/run/netns/1df8f6b8-b794-4a6b-865c-9606c4d804ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003343d0}] Aliases:map[]}"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673112277Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-skkdp for CNI network kindnet (type=ptp)"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673570354Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673596596Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673648023Z" level=info msg="Create NRI interface"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673769409Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673836827Z" level=info msg="runtime interface created"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673857459Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.67386652Z" level=info msg="runtime interface starting up..."
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673874142Z" level=info msg="starting plugins..."
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673889066Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.674302134Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:22:38 pause-374067 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aa29ee9f85ceb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   7a2da4d54c80d       coredns-66bc5c9577-skkdp               kube-system
	18568b03fc3a0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   24 seconds ago      Running             kube-proxy                0                   61bcf2c1be66e       kube-proxy-4fcdr                       kube-system
	563770d0bdcac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   5e2c8ba00b8f8       kindnet-4kv5p                          kube-system
	b02924f1926e4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   36 seconds ago      Running             kube-scheduler            0                   4f3d4274233b2       kube-scheduler-pause-374067            kube-system
	b769a8cbb7f80       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   36 seconds ago      Running             etcd                      0                   61090ed697cbe       etcd-pause-374067                      kube-system
	8914a013c2342       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   36 seconds ago      Running             kube-controller-manager   0                   5593f806fdb6f       kube-controller-manager-pause-374067   kube-system
	52fc240f04573       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   36 seconds ago      Running             kube-apiserver            0                   868fb73e453bd       kube-apiserver-pause-374067            kube-system
	
	
	==> coredns [aa29ee9f85cebf74baa27cf3e80679170de62efb51dbc4a90fcccc2d42304736] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52081 - 17876 "HINFO IN 8164539038983661835.4592159594530109976. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.445493544s
	
	
	==> describe nodes <==
	Name:               pause-374067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-374067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-374067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_22_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-374067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:22:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-374067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                0636ad55-ab75-4c10-a930-55d03b4df3d8
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-skkdp                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-374067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-4kv5p                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-374067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-374067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-4fcdr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-374067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-374067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-374067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-374067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-374067 event: Registered Node pause-374067 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-374067 status is now: NodeReady
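
==> editor's note: reading node conditions with client-go <==
The "describe nodes" capture above is a point-in-time snapshot. As an added sketch, the same condition block can be read programmatically with client-go; the kubeconfig path below is an assumption (the harness pointed KUBECONFIG at /home/jenkins/minikube-integration/21978-5690/kubeconfig), and the node name pause-374067 is taken from the capture.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed default kubeconfig location; override to match your setup.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "pause-374067", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Type/Status/Reason triplets that "describe nodes" renders.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}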
	
	
	==> dmesg <==
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 08:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.027365] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023898] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.024840] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.022897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +4.031610] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +8.191119] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[ +16.382253] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	
	
	==> etcd [b769a8cbb7f808fcea81adf919a6a8b785de217bddb177e810fe547c4123bcfa] <==
	{"level":"warn","ts":"2025-11-24T09:22:10.969726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:10.987101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:10.994290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.025786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.035432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.051680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.062354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.079837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.091789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.100510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.168551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:18.617439Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.172187ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384176092380 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:22:18.617580Z","caller":"traceutil/trace.go:172","msg":"trace[76623225] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"226.546372ms","start":"2025-11-24T09:22:18.391015Z","end":"2025-11-24T09:22:18.617562Z","steps":["trace[76623225] 'process raft request'  (duration: 103.964598ms)","trace[76623225] 'compare'  (duration: 122.092965ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.112445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.535593ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T09:22:21.112521Z","caller":"traceutil/trace.go:172","msg":"trace[436615694] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:365; }","duration":"157.621352ms","start":"2025-11-24T09:22:20.954882Z","end":"2025-11-24T09:22:21.112503Z","steps":["trace[436615694] 'agreement among raft nodes before linearized reading'  (duration: 36.533193ms)","trace[436615694] 'range keys from in-memory index tree'  (duration: 120.982925ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.112646Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.191437ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384176092561 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-4fcdr.187ae6f445f9871b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-4fcdr.187ae6f445f9871b\" value_size:682 lease:6571766384176092231 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:22:21.112840Z","caller":"traceutil/trace.go:172","msg":"trace[517862459] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"163.605247ms","start":"2025-11-24T09:22:20.949205Z","end":"2025-11-24T09:22:21.112810Z","steps":["trace[517862459] 'process raft request'  (duration: 42.21067ms)","trace[517862459] 'compare'  (duration: 120.896316ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.113059Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.355999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-374067\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"warn","ts":"2025-11-24T09:22:21.113898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.154064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4400"}
	{"level":"info","ts":"2025-11-24T09:22:21.116260Z","caller":"traceutil/trace.go:172","msg":"trace[1731630453] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"103.511376ms","start":"2025-11-24T09:22:21.012730Z","end":"2025-11-24T09:22:21.116242Z","steps":["trace[1731630453] 'agreement among raft nodes before linearized reading'  (duration: 101.080302ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.115771Z","caller":"traceutil/trace.go:172","msg":"trace[128131774] range","detail":"{range_begin:/registry/minions/pause-374067; range_end:; response_count:1; response_revision:366; }","duration":"103.057298ms","start":"2025-11-24T09:22:21.012689Z","end":"2025-11-24T09:22:21.115746Z","steps":["trace[128131774] 'agreement among raft nodes before linearized reading'  (duration: 100.270074ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.278559Z","caller":"traceutil/trace.go:172","msg":"trace[1484584899] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"138.882786ms","start":"2025-11-24T09:22:21.139655Z","end":"2025-11-24T09:22:21.278537Z","steps":["trace[1484584899] 'process raft request'  (duration: 138.352236ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.537860Z","caller":"traceutil/trace.go:172","msg":"trace[426610543] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"138.905549ms","start":"2025-11-24T09:22:21.398935Z","end":"2025-11-24T09:22:21.537840Z","steps":["trace[426610543] 'process raft request'  (duration: 116.794987ms)","trace[426610543] 'compare'  (duration: 22.020475ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.717876Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.127539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-4fcdr\" limit:1 ","response":"range_response_count:1 size:5034"}
	{"level":"info","ts":"2025-11-24T09:22:21.717944Z","caller":"traceutil/trace.go:172","msg":"trace[1331020490] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-4fcdr; range_end:; response_count:1; response_revision:371; }","duration":"117.206061ms","start":"2025-11-24T09:22:21.600721Z","end":"2025-11-24T09:22:21.717927Z","steps":["trace[1331020490] 'agreement among raft nodes before linearized reading'  (duration: 69.185083ms)","trace[1331020490] 'range keys from in-memory index tree'  (duration: 47.832628ms)"],"step_count":2}
	
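The warnings above are etcd flagging applies and linearized reads that blew past its 100ms expected-duration while the host was loaded (load average 4.66, per the kernel section below). As a triage aid, here is a minimal Go sketch, not part of the test suite, that scans etcd JSON log lines on stdin and prints the slow applies; the "msg" and "took" field names are taken from the entries above.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"time"
	)

	// etcdLine models only the fields this sketch needs from etcd's JSON logs.
	type etcdLine struct {
		Level string `json:"level"`
		Msg   string `json:"msg"`
		Took  string `json:"took"` // e.g. "121.191437ms"
	}

	func main() {
		threshold := 100 * time.Millisecond
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd request dumps can be long
		for sc.Scan() {
			var e etcdLine
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // tolerate non-JSON lines in a mixed log capture
			}
			if e.Msg != "apply request took too long" {
				continue
			}
			if d, err := time.ParseDuration(e.Took); err == nil && d > threshold {
				fmt.Printf("slow apply: %v\n", d)
			}
		}
	}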
	
	==> kernel <==
	 09:22:46 up  1:05,  0 user,  load average: 4.66, 2.18, 1.34
	Linux pause-374067 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [563770d0bdcaca8df60133fb589e3f8410e97a0a80741909cf32147018fb90a7] <==
	I1124 09:22:21.488493       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:22:21.488816       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:22:21.488932       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:22:21.488946       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:22:21.488963       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:22:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:22:21.804213       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:22:21.804246       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:22:21.804259       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:22:21.804461       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:22:22.104750       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:22:22.104798       1 metrics.go:72] Registering metrics
	I1124 09:22:22.104879       1 controller.go:711] "Syncing nftables rules"
	I1124 09:22:31.808431       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:22:31.808496       1 main.go:301] handling current node
	I1124 09:22:41.812070       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:22:41.812105       1 main.go:301] handling current node
	
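The "nri plugin exited: failed to connect to NRI service" line is expected on this image: CRI-O is not exposing an NRI socket, so the dial fails and kindnet carries on without the plugin. A stdlib-only sketch of the same probe, assuming nothing beyond the socket path printed in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The path kindnet dials; when the runtime has NRI disabled the socket
		// file is absent, yielding "connect: no such file or directory" as above.
		const sock = "/var/run/nri/nri.sock"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("NRI unavailable:", err)
			return
		}
		conn.Close()
		fmt.Println("NRI socket reachable at", sock)
	}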
	
	==> kube-apiserver [52fc240f0457397d561acf64bb27037c43f32a53bfb4416c6ad15817de9b3d61] <==
	I1124 09:22:12.105481       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 09:22:12.105848       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:22:12.109848       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:22:12.114418       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:22:12.136609       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:12.138041       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:22:12.149914       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:12.151457       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:22:12.920281       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:22:12.928568       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:22:12.928592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:22:13.568812       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:22:13.606400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:22:13.744204       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:22:13.751253       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 09:22:13.752463       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:22:13.756645       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:22:13.994598       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:22:14.662887       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:22:14.677870       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:22:14.691250       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:22:19.748939       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:22:19.796580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:19.801235       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:19.946426       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
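By 09:22:13 the apiserver has allocated the kubernetes ClusterIP and admitted the core resource kinds, which is what the healthz polling later in this log relies on. A hedged Go sketch of that probe, reusing the profile certificate paths that appear verbatim in the kapi client config further down (they are specific to this CI host):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// Paths copied from this run's rest.Config; adjust for your own host.
		base := "/home/jenkins/minikube-integration/21978-5690/.minikube"
		cert, err := tls.LoadX509KeyPair(
			base+"/profiles/pause-374067/client.crt",
			base+"/profiles/pause-374067/client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile(base + "/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		}}
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect "200 ok" while unpaused
	}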
	
	==> kube-controller-manager [8914a013c234229f7106c5d4ac049cca62019009b8a979764da4eaca6c996dd1] <==
	I1124 09:22:19.012723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-374067" podCIDRs=["10.244.0.0/24"]
	I1124 09:22:19.019911       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:22:19.030087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:19.038371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:22:19.039605       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:22:19.042866       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:22:19.043590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:22:19.043592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:22:19.043745       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:22:19.044017       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:22:19.044095       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:22:19.044237       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:22:19.044868       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:19.044944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:22:19.045042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:22:19.046259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 09:22:19.046283       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:22:19.046349       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:22:19.046364       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:22:19.046394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 09:22:19.046441       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:22:19.052049       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:22:19.058721       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:22:19.062170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:22:33.995497       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
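The burst of "Caches are synced" lines is client-go's shared-informer machinery completing its initial list/watch for each controller before reconciliation starts. For readers unfamiliar with that barrier, a minimal client-go sketch of the same pattern (reading KUBECONFIG is an assumption of the sketch, not something the controller-manager does):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		factory := informers.NewSharedInformerFactory(cs, 0)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// The same barrier the controller-manager logs as "Caches are synced":
		// block until the informer's first list/watch has populated its store.
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("pod informer cache synced")
	}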
	
	==> kube-proxy [18568b03fc3a07fc46ebb0f16b57658c50f8fa5f1791cee2b802d57da67d4304] <==
	I1124 09:22:21.344347       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:21.415578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:22:21.515986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:22:21.516029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:22:21.516194       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:22:21.560031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:22:21.560164       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:22:21.569823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:22:21.570388       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:22:21.570411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:21.573094       1 config.go:200] "Starting service config controller"
	I1124 09:22:21.573113       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:22:21.574399       1 config.go:309] "Starting node config controller"
	I1124 09:22:21.575374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:22:21.575394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:22:21.575169       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:22:21.575405       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:22:21.575155       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:22:21.575479       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:22:21.674123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:22:21.676523       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:22:21.676523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b02924f1926e4c0fc93eb2c175e5bfccd26af5cf344df926d0776e32bf3d38a6] <==
	E1124 09:22:12.058446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:22:12.058496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:22:12.058626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:22:12.058746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:22:12.058771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:22:12.058933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:22:12.059179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:22:12.059426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:22:12.059463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:22:12.060407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:22:12.060416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:22:12.060559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:22:12.874842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:22:12.893432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:22:12.900633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:22:12.952250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:22:12.977284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:22:12.999444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:22:13.000847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:22:13.003380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:22:13.054135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:22:13.139722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:22:13.188400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:22:13.310106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 09:22:15.848383       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
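The "Failed to watch ... is forbidden" errors are the usual startup race: the scheduler's informers begin listing before its RBAC bindings are reconciled, and they stop once the bindings land (note the clean cache sync at 09:22:15). To verify such a permission directly, a hedged sketch using a SubjectAccessReview, assuming a kubeconfig that is itself allowed to create them:

	package main

	import (
		"context"
		"fmt"
		"os"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Ask the API server whether system:kube-scheduler may list pods
		// cluster-wide, one of the exact permissions reported forbidden above.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", res.Status.Allowed, res.Status.Reason)
	}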
	
	==> kubelet <==
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.980880    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.981083    1308 projected.go:196] Error preparing data for projected volume kube-api-access-c8ls6 for pod kube-system/kindnet-4kv5p: configmap "kube-root-ca.crt" not found
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.981162    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4ea612d-af5f-4f10-96ec-4bbbd27f5176-kube-api-access-c8ls6 podName:a4ea612d-af5f-4f10-96ec-4bbbd27f5176 nodeName:}" failed. No retries permitted until 2025-11-24 09:22:20.48113871 +0000 UTC m=+6.033076998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c8ls6" (UniqueName: "kubernetes.io/projected/a4ea612d-af5f-4f10-96ec-4bbbd27f5176-kube-api-access-c8ls6") pod "kindnet-4kv5p" (UID: "a4ea612d-af5f-4f10-96ec-4bbbd27f5176") : configmap "kube-root-ca.crt" not found
	Nov 24 09:22:21 pause-374067 kubelet[1308]: I1124 09:22:21.801213    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4fcdr" podStartSLOduration=2.801190925 podStartE2EDuration="2.801190925s" podCreationTimestamp="2025-11-24 09:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:21.801071672 +0000 UTC m=+7.353009983" watchObservedRunningTime="2025-11-24 09:22:21.801190925 +0000 UTC m=+7.353129234"
	Nov 24 09:22:22 pause-374067 kubelet[1308]: I1124 09:22:22.523724    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4kv5p" podStartSLOduration=3.523699002 podStartE2EDuration="3.523699002s" podCreationTimestamp="2025-11-24 09:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:21.877804486 +0000 UTC m=+7.429742792" watchObservedRunningTime="2025-11-24 09:22:22.523699002 +0000 UTC m=+8.075637311"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.218274    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.266270    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7363696f-37d3-4e88-9725-81b0d8856d3c-config-volume\") pod \"coredns-66bc5c9577-skkdp\" (UID: \"7363696f-37d3-4e88-9725-81b0d8856d3c\") " pod="kube-system/coredns-66bc5c9577-skkdp"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.266322    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqz8\" (UniqueName: \"kubernetes.io/projected/7363696f-37d3-4e88-9725-81b0d8856d3c-kube-api-access-jnqz8\") pod \"coredns-66bc5c9577-skkdp\" (UID: \"7363696f-37d3-4e88-9725-81b0d8856d3c\") " pod="kube-system/coredns-66bc5c9577-skkdp"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.649431    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-skkdp" podStartSLOduration=12.649407925 podStartE2EDuration="12.649407925s" podCreationTimestamp="2025-11-24 09:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:32.646505943 +0000 UTC m=+18.198444286" watchObservedRunningTime="2025-11-24 09:22:32.649407925 +0000 UTC m=+18.201346234"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: W1124 09:22:36.638905    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.638982    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.639019    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.639031    1308 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: W1124 09:22:38.572810    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.572903    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.572981    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.573005    1308 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.573023    1308 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644369    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644424    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644439    1308 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:43 pause-374067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:22:43 pause-374067 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:22:43 pause-374067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:22:43 pause-374067 systemd[1]: kubelet.service: Consumed 1.254s CPU time.
	
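The dial failures against /var/run/crio/crio.sock are the direct effect of the pause operation stopping CRI-O: the socket file disappears while the kubelet keeps polling the runtime, so every CRI call returns Unavailable until the runtime comes back. A sketch of the same Version RPC the kubelet issues, assuming the google.golang.org/grpc and k8s.io/cri-api modules:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		cri "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// The endpoint the kubelet dials; with CRI-O stopped the socket is gone
		// and the RPC fails with codes.Unavailable, matching the log lines above.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		v, err := cri.NewRuntimeServiceClient(conn).Version(ctx, &cri.VersionRequest{})
		if err != nil {
			fmt.Println("runtime unavailable:", err)
			return
		}
		fmt.Printf("%s %s ready\n", v.RuntimeName, v.RuntimeVersion)
	}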

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374067 -n pause-374067
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374067 -n pause-374067: exit status 2 (346.122632ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-374067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-374067
helpers_test.go:243: (dbg) docker inspect pause-374067:

-- stdout --
	[
	    {
	        "Id": "fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834",
	        "Created": "2025-11-24T09:21:53.706624648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:21:53.823970065Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/hosts",
	        "LogPath": "/var/lib/docker/containers/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834/fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834-json.log",
	        "Name": "/pause-374067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-374067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-374067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc012686e50b945e4005b70563fc4e6ee563bf2689b6d967ea20d56a42017834",
	                "LowerDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25f40940efa5bea63498d739cfe4dd66a21c1b2ea3a608f67499dd8613582d95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-374067",
	                "Source": "/var/lib/docker/volumes/pause-374067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-374067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-374067",
	                "name.minikube.sigs.k8s.io": "pause-374067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "88d14c65728ce8eae49ed261117d140fb9387bffd2cc09af11aece1c5d0e0a2e",
	            "SandboxKey": "/var/run/docker/netns/88d14c65728c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-374067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "207fa621dc590e92b376cf58bb9028748fb46ca63b98e9831ad9de96dd8dddac",
	                    "EndpointID": "cc3a21a15e49c4940aa0327dace069cf31947101cdd52f6b2817b025bd91a628",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:9c:5a:5e:cb:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-374067",
	                        "fc012686e50b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
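
The inspect output shows 8443/tcp published on 127.0.0.1:32981, which is how host-side tooling reaches the apiserver inside the container. A small Go sketch, illustrative only, that extracts that mapping from docker inspect output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// binding mirrors the HostIp/HostPort objects under NetworkSettings.Ports.
	type binding struct {
		HostIp   string
		HostPort string
	}

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-374067").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			// For the container above this prints 127.0.0.1:32981.
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
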
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-374067 -n pause-374067
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-374067 -n pause-374067: exit status 2 (370.219457ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-374067 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-374067 logs -n 25: (1.08097467s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --cancel-scheduled                                                                                 │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │ 24 Nov 25 09:20 UTC │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │                     │
	│ stop    │ -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:20 UTC │ 24 Nov 25 09:21 UTC │
	│ delete  │ -p scheduled-stop-310817                                                                                                    │ scheduled-stop-310817       │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p insufficient-storage-761969 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-761969 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │                     │
	│ delete  │ -p insufficient-storage-761969                                                                                              │ insufficient-storage-761969 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p force-systemd-env-401542 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-401542    │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p pause-374067 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p offline-crio-330284 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-330284         │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-385309 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-385309      │ jenkins │ v1.32.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ delete  │ -p force-systemd-env-401542                                                                                                 │ force-systemd-env-401542    │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p force-systemd-flag-595035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-595035   │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ delete  │ -p offline-crio-330284                                                                                                      │ offline-crio-330284         │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p pause-374067 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p cert-expiration-362724 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-362724      │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ stop    │ stopped-upgrade-385309 stop                                                                                                 │ stopped-upgrade-385309      │ jenkins │ v1.32.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-385309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-385309      │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ pause   │ -p pause-374067 --alsologtostderr -v=5                                                                                      │ pause-374067                │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:22:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:22:42.976583  220881 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:22:42.976717  220881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:42.976728  220881 out.go:374] Setting ErrFile to fd 2...
	I1124 09:22:42.976735  220881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:22:42.977492  220881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:22:42.978226  220881 out.go:368] Setting JSON to false
	I1124 09:22:42.980364  220881 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3909,"bootTime":1763972254,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:22:42.980447  220881 start.go:143] virtualization: kvm guest
	I1124 09:22:42.982247  220881 out.go:179] * [stopped-upgrade-385309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:22:42.983844  220881 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:22:42.983878  220881 notify.go:221] Checking for updates...
	I1124 09:22:42.985856  220881 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:22:42.986898  220881 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:22:42.988061  220881 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:22:42.989255  220881 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:22:42.990395  220881 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:22:42.992147  220881 config.go:182] Loaded profile config "stopped-upgrade-385309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 09:22:42.994145  220881 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1124 09:22:42.995185  220881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:22:43.025455  220881 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:22:43.025638  220881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:22:43.131386  220881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 09:22:43.110713662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:22:43.131525  220881 docker.go:319] overlay module found
	I1124 09:22:43.134952  220881 out.go:179] * Using the docker driver based on existing profile
	I1124 09:22:41.197721  218120 addons.go:530] duration metric: took 110.38141ms for enable addons: enabled=[]
	I1124 09:22:41.197797  218120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:41.303955  218120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:41.316654  218120 node_ready.go:35] waiting up to 6m0s for node "pause-374067" to be "Ready" ...
	I1124 09:22:41.324884  218120 node_ready.go:49] node "pause-374067" is "Ready"
	I1124 09:22:41.324911  218120 node_ready.go:38] duration metric: took 8.227238ms for node "pause-374067" to be "Ready" ...
	I1124 09:22:41.324929  218120 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:41.324980  218120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:41.336240  218120 api_server.go:72] duration metric: took 248.993028ms to wait for apiserver process to appear ...
	I1124 09:22:41.336258  218120 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:41.336275  218120 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:22:41.341085  218120 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:22:41.341919  218120 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:41.341942  218120 api_server.go:131] duration metric: took 5.677576ms to wait for apiserver health ...
	I1124 09:22:41.341952  218120 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:41.345077  218120 system_pods.go:59] 7 kube-system pods found
	I1124 09:22:41.345102  218120 system_pods.go:61] "coredns-66bc5c9577-skkdp" [7363696f-37d3-4e88-9725-81b0d8856d3c] Running
	I1124 09:22:41.345109  218120 system_pods.go:61] "etcd-pause-374067" [746e1952-ef35-434d-933a-40d11253112d] Running
	I1124 09:22:41.345116  218120 system_pods.go:61] "kindnet-4kv5p" [a4ea612d-af5f-4f10-96ec-4bbbd27f5176] Running
	I1124 09:22:41.345121  218120 system_pods.go:61] "kube-apiserver-pause-374067" [5ed58586-c875-47ef-8b98-b18ff6f6f6f7] Running
	I1124 09:22:41.345128  218120 system_pods.go:61] "kube-controller-manager-pause-374067" [2aff41ea-eb6f-45e0-9651-d6222f13147d] Running
	I1124 09:22:41.345135  218120 system_pods.go:61] "kube-proxy-4fcdr" [4b13baad-7b4e-4b8b-bebe-1464390054d7] Running
	I1124 09:22:41.345140  218120 system_pods.go:61] "kube-scheduler-pause-374067" [5a19238f-cb6f-46bf-8e66-08fc1f458d59] Running
	I1124 09:22:41.345151  218120 system_pods.go:74] duration metric: took 3.192801ms to wait for pod list to return data ...
	I1124 09:22:41.345159  218120 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:22:41.346805  218120 default_sa.go:45] found service account: "default"
	I1124 09:22:41.346824  218120 default_sa.go:55] duration metric: took 1.659179ms for default service account to be created ...
	I1124 09:22:41.346831  218120 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:22:41.348976  218120 system_pods.go:86] 7 kube-system pods found
	I1124 09:22:41.348997  218120 system_pods.go:89] "coredns-66bc5c9577-skkdp" [7363696f-37d3-4e88-9725-81b0d8856d3c] Running
	I1124 09:22:41.349002  218120 system_pods.go:89] "etcd-pause-374067" [746e1952-ef35-434d-933a-40d11253112d] Running
	I1124 09:22:41.349006  218120 system_pods.go:89] "kindnet-4kv5p" [a4ea612d-af5f-4f10-96ec-4bbbd27f5176] Running
	I1124 09:22:41.349009  218120 system_pods.go:89] "kube-apiserver-pause-374067" [5ed58586-c875-47ef-8b98-b18ff6f6f6f7] Running
	I1124 09:22:41.349013  218120 system_pods.go:89] "kube-controller-manager-pause-374067" [2aff41ea-eb6f-45e0-9651-d6222f13147d] Running
	I1124 09:22:41.349017  218120 system_pods.go:89] "kube-proxy-4fcdr" [4b13baad-7b4e-4b8b-bebe-1464390054d7] Running
	I1124 09:22:41.349022  218120 system_pods.go:89] "kube-scheduler-pause-374067" [5a19238f-cb6f-46bf-8e66-08fc1f458d59] Running
	I1124 09:22:41.349029  218120 system_pods.go:126] duration metric: took 2.193629ms to wait for k8s-apps to be running ...
	I1124 09:22:41.349037  218120 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:22:41.349078  218120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:41.361476  218120 system_svc.go:56] duration metric: took 12.43119ms WaitForService to wait for kubelet
	I1124 09:22:41.361498  218120 kubeadm.go:587] duration metric: took 274.25544ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:22:41.361512  218120 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:41.363839  218120 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:22:41.363861  218120 node_conditions.go:123] node cpu capacity is 8
	I1124 09:22:41.363876  218120 node_conditions.go:105] duration metric: took 2.359452ms to run NodePressure ...
	I1124 09:22:41.363891  218120 start.go:242] waiting for startup goroutines ...
	I1124 09:22:41.363906  218120 start.go:247] waiting for cluster config update ...
	I1124 09:22:41.363920  218120 start.go:256] writing updated cluster config ...
	I1124 09:22:41.448515  218120 ssh_runner.go:195] Run: rm -f paused
	I1124 09:22:41.452831  218120 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:41.453251  218120 kapi.go:59] client config for pause-374067: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/profiles/pause-374067/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/profiles/pause-374067/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:22:41.456062  218120 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-skkdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.460709  218120 pod_ready.go:94] pod "coredns-66bc5c9577-skkdp" is "Ready"
	I1124 09:22:41.460727  218120 pod_ready.go:86] duration metric: took 4.642254ms for pod "coredns-66bc5c9577-skkdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.462890  218120 pod_ready.go:83] waiting for pod "etcd-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.468503  218120 pod_ready.go:94] pod "etcd-pause-374067" is "Ready"
	I1124 09:22:41.468528  218120 pod_ready.go:86] duration metric: took 5.619458ms for pod "etcd-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.470178  218120 pod_ready.go:83] waiting for pod "kube-apiserver-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.473822  218120 pod_ready.go:94] pod "kube-apiserver-pause-374067" is "Ready"
	I1124 09:22:41.473845  218120 pod_ready.go:86] duration metric: took 3.650045ms for pod "kube-apiserver-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.475522  218120 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:41.856182  218120 pod_ready.go:94] pod "kube-controller-manager-pause-374067" is "Ready"
	I1124 09:22:41.856216  218120 pod_ready.go:86] duration metric: took 380.674306ms for pod "kube-controller-manager-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.056305  218120 pod_ready.go:83] waiting for pod "kube-proxy-4fcdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.458240  218120 pod_ready.go:94] pod "kube-proxy-4fcdr" is "Ready"
	I1124 09:22:42.458266  218120 pod_ready.go:86] duration metric: took 401.93848ms for pod "kube-proxy-4fcdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:42.657956  218120 pod_ready.go:83] waiting for pod "kube-scheduler-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.059276  218120 pod_ready.go:94] pod "kube-scheduler-pause-374067" is "Ready"
	I1124 09:22:43.059300  218120 pod_ready.go:86] duration metric: took 401.319414ms for pod "kube-scheduler-pause-374067" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.059313  218120 pod_ready.go:40] duration metric: took 1.606450666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:43.143251  218120 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:22:43.144964  218120 out.go:179] * Done! kubectl is now configured to use "pause-374067" cluster and "default" namespace by default
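
The api_server.go lines above record a healthz poll against https://192.168.94.2:8443/healthz that returned 200 within a few milliseconds. A minimal standalone sketch of that kind of probe in Go; the endpoint is copied from the log, and skipping TLS verification is a simplification for the sketch only (minikube itself authenticates with the client certs shown in the rest.Config dump above):

// healthz_probe.go: minimal sketch of an apiserver healthz poll.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("healthz returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval chosen arbitrarily for the sketch
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
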
	I1124 09:22:43.136478  220881 start.go:309] selected driver: docker
	I1124 09:22:43.136494  220881 start.go:927] validating driver "docker" against &{Name:stopped-upgrade-385309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-385309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:43.136597  220881 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:22:43.137435  220881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:22:43.241892  220881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:22:43.227598154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:22:43.242478  220881 cni.go:84] Creating CNI manager for ""
	I1124 09:22:43.242627  220881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:43.242734  220881 start.go:353] cluster config:
	{Name:stopped-upgrade-385309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-385309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:43.245174  220881 out.go:179] * Starting "stopped-upgrade-385309" primary control-plane node in "stopped-upgrade-385309" cluster
	I1124 09:22:43.246236  220881 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:22:43.248307  220881 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:22:43.254082  220881 preload.go:188] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1124 09:22:43.254245  220881 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1124 09:22:43.254792  220881 cache.go:65] Caching tarball of preloaded images
	I1124 09:22:43.254918  220881 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:22:43.254972  220881 cache.go:68] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1124 09:22:43.255128  220881 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/stopped-upgrade-385309/config.json ...
	I1124 09:22:43.254164  220881 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1124 09:22:43.279047  220881 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1124 09:22:43.279277  220881 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1124 09:22:43.279295  220881 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1124 09:22:43.279302  220881 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1124 09:22:43.279327  220881 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1124 09:22:43.279378  220881 cache.go:176] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1124 09:22:43.378170  220881 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1124 09:22:43.378221  220881 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:22:43.378267  220881 start.go:360] acquireMachinesLock for stopped-upgrade-385309: {Name:mk9d8748ac1e371ea38d4df7aaf13ce72c77f655 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:22:43.378390  220881 start.go:364] duration metric: took 96.076µs to acquireMachinesLock for "stopped-upgrade-385309"
	I1124 09:22:43.378419  220881 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:22:43.378426  220881 fix.go:54] fixHost starting: 
	I1124 09:22:43.378785  220881 cli_runner.go:164] Run: docker container inspect stopped-upgrade-385309 --format={{.State.Status}}
	I1124 09:22:43.407956  220881 fix.go:112] recreateIfNeeded on stopped-upgrade-385309: state=Stopped err=<nil>
	W1124 09:22:43.407990  220881 fix.go:138] unexpected machine state, will restart: <nil>
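
The cli_runner.go:164 line above shells out to docker container inspect with a --format template and maps the raw status onto minikube's state=Stopped. A sketch of the same check, assuming the profile name from this log:

// inspect_state.go: sketch of the container-state check behind cli_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints the raw Docker status (e.g. "exited"), which minikube then
	// interprets as a stopped machine and restarts via fixHost.
	out, err := exec.Command("docker", "container", "inspect",
		"stopped-upgrade-385309", "--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("raw container state:", strings.TrimSpace(string(out)))
}
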
	I1124 09:22:44.168642  213831 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.869598314s
	I1124 09:22:44.578702  213831 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.288282588s
	I1124 09:22:46.292213  213831 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001855103s
	I1124 09:22:46.309210  213831 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:22:46.318646  213831 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:22:46.330312  213831 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:22:46.330645  213831 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-595035 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:22:46.339558  213831 kubeadm.go:319] [bootstrap-token] Using token: hgjfz4.jtbrgvttwkwxxxh3
	
	
	==> CRI-O <==
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.573409994Z" level=info msg="RDT not available in the host system"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57342628Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57424771Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57426545Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.574277757Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.57503427Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.575051441Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.579274449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.579299655Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580001524Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580388658Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.580462511Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.672926776Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-skkdp Namespace:kube-system ID:7a2da4d54c80d2a2351e9c974d3cf144eca92f243c68e57682e6d8bd13d03c69 UID:7363696f-37d3-4e88-9725-81b0d8856d3c NetNS:/var/run/netns/1df8f6b8-b794-4a6b-865c-9606c4d804ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003343d0}] Aliases:map[]}"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673112277Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-skkdp for CNI network kindnet (type=ptp)"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673570354Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673596596Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673648023Z" level=info msg="Create NRI interface"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673769409Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673836827Z" level=info msg="runtime interface created"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673857459Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.67386652Z" level=info msg="runtime interface starting up..."
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673874142Z" level=info msg="starting plugins..."
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.673889066Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:22:38 pause-374067 crio[2183]: time="2025-11-24T09:22:38.674302134Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:22:38 pause-374067 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
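
The "Conmon does support the --sync option" lines above are capability probes against the conmon binary. A hypothetical reconstruction of such a probe; scanning the binary's --help output is an assumption here, not necessarily how CRI-O detects these options, though the binary path and flag names come from the log:

// conmon_probe.go: hedged sketch of a flag-support probe.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// supportsFlag runs the binary with --help and looks for the flag in its
// usage text; the exit code is deliberately ignored for this sketch.
func supportsFlag(bin, flag string) bool {
	out, _ := exec.Command(bin, "--help").CombinedOutput()
	return strings.Contains(string(out), flag)
}

func main() {
	for _, flag := range []string{"--sync", "--log-global-size-max"} {
		fmt.Printf("conmon supports %s: %v\n", flag, supportsFlag("/usr/libexec/crio/conmon", flag))
	}
}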
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aa29ee9f85ceb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   7a2da4d54c80d       coredns-66bc5c9577-skkdp               kube-system
	18568b03fc3a0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   26 seconds ago      Running             kube-proxy                0                   61bcf2c1be66e       kube-proxy-4fcdr                       kube-system
	563770d0bdcac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   5e2c8ba00b8f8       kindnet-4kv5p                          kube-system
	b02924f1926e4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   38 seconds ago      Running             kube-scheduler            0                   4f3d4274233b2       kube-scheduler-pause-374067            kube-system
	b769a8cbb7f80       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   38 seconds ago      Running             etcd                      0                   61090ed697cbe       etcd-pause-374067                      kube-system
	8914a013c2342       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   38 seconds ago      Running             kube-controller-manager   0                   5593f806fdb6f       kube-controller-manager-pause-374067   kube-system
	52fc240f04573       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   38 seconds ago      Running             kube-apiserver            0                   868fb73e453bd       kube-apiserver-pause-374067            kube-system
	
	
	==> coredns [aa29ee9f85cebf74baa27cf3e80679170de62efb51dbc4a90fcccc2d42304736] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52081 - 17876 "HINFO IN 8164539038983661835.4592159594530109976. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.445493544s
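
The HINFO query with the random name logged above is CoreDNS's startup self-test against its own listener. A sketch of a comparable readiness-style lookup pointed at one specific DNS server; the 10.96.0.10:53 target is the kube-dns ClusterIP seen in the kube-apiserver log further down, and the query name is an assumption:

// dns_probe.go: sketch of a lookup pinned to a single DNS server.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Override the dialer so every query goes to the chosen server
		// instead of whatever /etc/resolv.conf points at.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println("addrs:", addrs, "err:", err)
}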
	
	
	==> describe nodes <==
	Name:               pause-374067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-374067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-374067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_22_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-374067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:22:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:22:32 +0000   Mon, 24 Nov 2025 09:22:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-374067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                0636ad55-ab75-4c10-a930-55d03b4df3d8
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-skkdp                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-374067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-4kv5p                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-374067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-374067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-4fcdr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-374067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-374067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-374067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-374067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-374067 event: Registered Node pause-374067 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-374067 status is now: NodeReady
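
The percentages in the "Allocated resources" block are just summed requests over node capacity; a tiny check recomputing the cpu row from the per-pod requests and the Capacity block above (values in millicores):

// alloc_pct.go: reproduces the "cpu 850m (10%)" line from the table.
package main

import "fmt"

func main() {
	// coredns + etcd + kindnet + apiserver + controller-manager + proxy + scheduler
	requests := 100 + 100 + 100 + 250 + 200 + 0 + 100
	capacity := 8 * 1000 // cpu: 8 from the Capacity block
	fmt.Printf("cpu %dm (%d%%)\n", requests, requests*100/capacity)
}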
	
	
	==> dmesg <==
	[  +0.081417] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024229] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.472063] kauditd_printk_skb: 47 callbacks suppressed
	[Nov24 08:31] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.027365] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023898] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.024840] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.022897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +4.031610] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[  +8.191119] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[ +16.382253] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	
	
	==> etcd [b769a8cbb7f808fcea81adf919a6a8b785de217bddb177e810fe547c4123bcfa] <==
	{"level":"warn","ts":"2025-11-24T09:22:10.969726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:10.987101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:10.994290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.025786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.035432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.051680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.062354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.079837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.091789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.100510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:11.168551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:18.617439Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.172187ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384176092380 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:22:18.617580Z","caller":"traceutil/trace.go:172","msg":"trace[76623225] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"226.546372ms","start":"2025-11-24T09:22:18.391015Z","end":"2025-11-24T09:22:18.617562Z","steps":["trace[76623225] 'process raft request'  (duration: 103.964598ms)","trace[76623225] 'compare'  (duration: 122.092965ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.112445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.535593ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T09:22:21.112521Z","caller":"traceutil/trace.go:172","msg":"trace[436615694] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:365; }","duration":"157.621352ms","start":"2025-11-24T09:22:20.954882Z","end":"2025-11-24T09:22:21.112503Z","steps":["trace[436615694] 'agreement among raft nodes before linearized reading'  (duration: 36.533193ms)","trace[436615694] 'range keys from in-memory index tree'  (duration: 120.982925ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.112646Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.191437ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384176092561 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-4fcdr.187ae6f445f9871b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-4fcdr.187ae6f445f9871b\" value_size:682 lease:6571766384176092231 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:22:21.112840Z","caller":"traceutil/trace.go:172","msg":"trace[517862459] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"163.605247ms","start":"2025-11-24T09:22:20.949205Z","end":"2025-11-24T09:22:21.112810Z","steps":["trace[517862459] 'process raft request'  (duration: 42.21067ms)","trace[517862459] 'compare'  (duration: 120.896316ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.113059Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.355999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-374067\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"warn","ts":"2025-11-24T09:22:21.113898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.154064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4400"}
	{"level":"info","ts":"2025-11-24T09:22:21.116260Z","caller":"traceutil/trace.go:172","msg":"trace[1731630453] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"103.511376ms","start":"2025-11-24T09:22:21.012730Z","end":"2025-11-24T09:22:21.116242Z","steps":["trace[1731630453] 'agreement among raft nodes before linearized reading'  (duration: 101.080302ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.115771Z","caller":"traceutil/trace.go:172","msg":"trace[128131774] range","detail":"{range_begin:/registry/minions/pause-374067; range_end:; response_count:1; response_revision:366; }","duration":"103.057298ms","start":"2025-11-24T09:22:21.012689Z","end":"2025-11-24T09:22:21.115746Z","steps":["trace[128131774] 'agreement among raft nodes before linearized reading'  (duration: 100.270074ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.278559Z","caller":"traceutil/trace.go:172","msg":"trace[1484584899] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"138.882786ms","start":"2025-11-24T09:22:21.139655Z","end":"2025-11-24T09:22:21.278537Z","steps":["trace[1484584899] 'process raft request'  (duration: 138.352236ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:22:21.537860Z","caller":"traceutil/trace.go:172","msg":"trace[426610543] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"138.905549ms","start":"2025-11-24T09:22:21.398935Z","end":"2025-11-24T09:22:21.537840Z","steps":["trace[426610543] 'process raft request'  (duration: 116.794987ms)","trace[426610543] 'compare'  (duration: 22.020475ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:22:21.717876Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.127539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-4fcdr\" limit:1 ","response":"range_response_count:1 size:5034"}
	{"level":"info","ts":"2025-11-24T09:22:21.717944Z","caller":"traceutil/trace.go:172","msg":"trace[1331020490] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-4fcdr; range_end:; response_count:1; response_revision:371; }","duration":"117.206061ms","start":"2025-11-24T09:22:21.600721Z","end":"2025-11-24T09:22:21.717927Z","steps":["trace[1331020490] 'agreement among raft nodes before linearized reading'  (duration: 69.185083ms)","trace[1331020490] 'range keys from in-memory index tree'  (duration: 47.832628ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:22:48 up  1:05,  0 user,  load average: 4.66, 2.18, 1.34
	Linux pause-374067 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [563770d0bdcaca8df60133fb589e3f8410e97a0a80741909cf32147018fb90a7] <==
	I1124 09:22:21.488493       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:22:21.488816       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:22:21.488932       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:22:21.488946       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:22:21.488963       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:22:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:22:21.804213       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:22:21.804246       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:22:21.804259       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:22:21.804461       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:22:22.104750       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:22:22.104798       1 metrics.go:72] Registering metrics
	I1124 09:22:22.104879       1 controller.go:711] "Syncing nftables rules"
	I1124 09:22:31.808431       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:22:31.808496       1 main.go:301] handling current node
	I1124 09:22:41.812070       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:22:41.812105       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52fc240f0457397d561acf64bb27037c43f32a53bfb4416c6ad15817de9b3d61] <==
	I1124 09:22:12.105481       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 09:22:12.105848       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:22:12.109848       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:22:12.114418       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:22:12.136609       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:12.138041       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:22:12.149914       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:12.151457       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:22:12.920281       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:22:12.928568       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:22:12.928592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:22:13.568812       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:22:13.606400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:22:13.744204       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:22:13.751253       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 09:22:13.752463       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:22:13.756645       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:22:13.994598       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:22:14.662887       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:22:14.677870       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:22:14.691250       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:22:19.748939       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:22:19.796580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:19.801235       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:22:19.946426       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8914a013c234229f7106c5d4ac049cca62019009b8a979764da4eaca6c996dd1] <==
	I1124 09:22:19.012723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-374067" podCIDRs=["10.244.0.0/24"]
	I1124 09:22:19.019911       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:22:19.030087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:19.038371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:22:19.039605       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:22:19.042866       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:22:19.043590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:22:19.043592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:22:19.043745       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:22:19.044017       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:22:19.044095       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:22:19.044237       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:22:19.044868       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:19.044944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:22:19.045042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:22:19.046259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 09:22:19.046283       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:22:19.046349       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:22:19.046364       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:22:19.046394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 09:22:19.046441       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:22:19.052049       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:22:19.058721       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:22:19.062170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:22:33.995497       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [18568b03fc3a07fc46ebb0f16b57658c50f8fa5f1791cee2b802d57da67d4304] <==
	I1124 09:22:21.344347       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:21.415578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:22:21.515986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:22:21.516029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:22:21.516194       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:22:21.560031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:22:21.560164       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:22:21.569823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:22:21.570388       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:22:21.570411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:21.573094       1 config.go:200] "Starting service config controller"
	I1124 09:22:21.573113       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:22:21.574399       1 config.go:309] "Starting node config controller"
	I1124 09:22:21.575374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:22:21.575394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:22:21.575169       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:22:21.575405       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:22:21.575155       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:22:21.575479       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:22:21.674123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:22:21.676523       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:22:21.676523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b02924f1926e4c0fc93eb2c175e5bfccd26af5cf344df926d0776e32bf3d38a6] <==
	E1124 09:22:12.058446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:22:12.058496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:22:12.058626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:22:12.058746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:22:12.058771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:22:12.058933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:22:12.059179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:22:12.059426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:22:12.059463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:22:12.060407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:22:12.060416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:22:12.060559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:22:12.874842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:22:12.893432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:22:12.900633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:22:12.952250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:22:12.977284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:22:12.999444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:22:13.000847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:22:13.003380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:22:13.054135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:22:13.139722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:22:13.188400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:22:13.310106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 09:22:15.848383       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.980880    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.981083    1308 projected.go:196] Error preparing data for projected volume kube-api-access-c8ls6 for pod kube-system/kindnet-4kv5p: configmap "kube-root-ca.crt" not found
	Nov 24 09:22:19 pause-374067 kubelet[1308]: E1124 09:22:19.981162    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4ea612d-af5f-4f10-96ec-4bbbd27f5176-kube-api-access-c8ls6 podName:a4ea612d-af5f-4f10-96ec-4bbbd27f5176 nodeName:}" failed. No retries permitted until 2025-11-24 09:22:20.48113871 +0000 UTC m=+6.033076998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c8ls6" (UniqueName: "kubernetes.io/projected/a4ea612d-af5f-4f10-96ec-4bbbd27f5176-kube-api-access-c8ls6") pod "kindnet-4kv5p" (UID: "a4ea612d-af5f-4f10-96ec-4bbbd27f5176") : configmap "kube-root-ca.crt" not found
	Nov 24 09:22:21 pause-374067 kubelet[1308]: I1124 09:22:21.801213    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4fcdr" podStartSLOduration=2.801190925 podStartE2EDuration="2.801190925s" podCreationTimestamp="2025-11-24 09:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:21.801071672 +0000 UTC m=+7.353009983" watchObservedRunningTime="2025-11-24 09:22:21.801190925 +0000 UTC m=+7.353129234"
	Nov 24 09:22:22 pause-374067 kubelet[1308]: I1124 09:22:22.523724    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4kv5p" podStartSLOduration=3.523699002 podStartE2EDuration="3.523699002s" podCreationTimestamp="2025-11-24 09:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:21.877804486 +0000 UTC m=+7.429742792" watchObservedRunningTime="2025-11-24 09:22:22.523699002 +0000 UTC m=+8.075637311"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.218274    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.266270    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7363696f-37d3-4e88-9725-81b0d8856d3c-config-volume\") pod \"coredns-66bc5c9577-skkdp\" (UID: \"7363696f-37d3-4e88-9725-81b0d8856d3c\") " pod="kube-system/coredns-66bc5c9577-skkdp"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.266322    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqz8\" (UniqueName: \"kubernetes.io/projected/7363696f-37d3-4e88-9725-81b0d8856d3c-kube-api-access-jnqz8\") pod \"coredns-66bc5c9577-skkdp\" (UID: \"7363696f-37d3-4e88-9725-81b0d8856d3c\") " pod="kube-system/coredns-66bc5c9577-skkdp"
	Nov 24 09:22:32 pause-374067 kubelet[1308]: I1124 09:22:32.649431    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-skkdp" podStartSLOduration=12.649407925 podStartE2EDuration="12.649407925s" podCreationTimestamp="2025-11-24 09:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:22:32.646505943 +0000 UTC m=+18.198444286" watchObservedRunningTime="2025-11-24 09:22:32.649407925 +0000 UTC m=+18.201346234"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: W1124 09:22:36.638905    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.638982    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.639019    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:36 pause-374067 kubelet[1308]: E1124 09:22:36.639031    1308 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: W1124 09:22:38.572810    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.572903    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.572981    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.573005    1308 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.573023    1308 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644369    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644424    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:38 pause-374067 kubelet[1308]: E1124 09:22:38.644439    1308 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 09:22:43 pause-374067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:22:43 pause-374067 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:22:43 pause-374067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:22:43 pause-374067 systemd[1]: kubelet.service: Consumed 1.254s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374067 -n pause-374067
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374067 -n pause-374067: exit status 2 (403.169706ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-374067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.60s)
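A minimal triage sketch for this failure, outside the harness: the kubelet log above repeatedly fails to dial /var/run/crio/crio.sock, so the first check on the node is whether the crio service and its socket are still there. The profile name pause-374067 is taken from the logs above; these are plain minikube/systemd commands, nothing test-specific.

	minikube ssh -p pause-374067 -- sudo systemctl status crio --no-pager    # is the runtime service up after the pause?
	minikube ssh -p pause-374067 -- ls -l /var/run/crio/crio.sock            # a missing socket explains the dial errors above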

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.271293ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:28:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-767267 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-767267 describe deploy/metrics-server -n kube-system: exit status 1 (63.457968ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-767267 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
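Before the post-mortem, a minimal reproduction sketch, assuming only the old-k8s-version-767267 profile named above: the MK_ADDON_ENABLE_PAUSED error shows the paused check shelling out to runc inside the node, so running the same commands by hand surfaces the same failure.

	minikube ssh -p old-k8s-version-767267 -- sudo runc list -f json    # the exact command the check runs, per the stderr above
	minikube ssh -p old-k8s-version-767267 -- ls -ld /run/runc          # the path the error reports as missing

runc creates its state directory lazily, so /run/runc can be absent on a node where no container has yet been created through runc's default root.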
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-767267
helpers_test.go:243: (dbg) docker inspect old-k8s-version-767267:

-- stdout --
	[
	    {
	        "Id": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	        "Created": "2025-11-24T09:27:59.477215384Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:27:59.516502764Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hosts",
	        "LogPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558-json.log",
	        "Name": "/old-k8s-version-767267",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-767267:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-767267",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	                "LowerDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-767267",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-767267/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-767267",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d93c7a5313fe03167f1cad9a2caecabd3017906bfd6158ef9209f5aa487d9cd1",
	            "SandboxKey": "/var/run/docker/netns/d93c7a5313fe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-767267": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49a891848d14199803dd04f544287d94ca351d74be411134145450566451080b",
	                    "EndpointID": "88d95bedbba8ebd0422ff8a3cdde3b580a676146da7a64b3f36e4ee643cb7620",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:4b:2c:76:fb:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-767267",
	                        "b2fbca5819e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
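A small aside on reading these inspect dumps, a hedged sketch assuming only the container name above: docker's Go-template support can pull the published apiserver port directly instead of scanning the JSON by eye.

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-767267    # prints 33096 per the Ports block above

Substituting "22/tcp" gives the SSH port (33093) the same way.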
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25: (1.143365229s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                 │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo ip a s                                                                                                                 │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo ip r s                                                                                                                 │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo iptables-save                                                                                                          │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo iptables -t nat -L -n -v                                                                                               │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status kubelet --all --full --no-pager                                                                       │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl cat kubelet --no-pager                                                                                       │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo journalctl -xeu kubelet --all --full --no-pager                                                                        │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /etc/kubernetes/kubelet.conf                                                                                       │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /var/lib/kubelet/config.yaml                                                                                       │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status docker --all --full --no-pager                                                                        │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat docker --no-pager                                                                                        │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /etc/docker/daemon.json                                                                                            │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo docker system info                                                                                                     │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl status cri-docker --all --full --no-pager                                                                    │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat cri-docker --no-pager                                                                                    │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                               │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                         │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cri-dockerd --version                                                                                                  │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status containerd --all --full --no-pager                                                                    │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat containerd --no-pager                                                                                    │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /lib/systemd/system/containerd.service                                                                             │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-767267 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /etc/containerd/config.toml                                                                                        │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo containerd config dump                                                                                                 │ bridge-949664          │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:28:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:28:00.633676  313419 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:28:00.633912  313419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:28:00.633920  313419 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:00.633925  313419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:28:00.634169  313419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:28:00.634748  313419 out.go:368] Setting JSON to false
	I1124 09:28:00.635974  313419 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4227,"bootTime":1763972254,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:28:00.636024  313419 start.go:143] virtualization: kvm guest
	I1124 09:28:00.637944  313419 out.go:179] * [no-preload-938348] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:28:00.639235  313419 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:28:00.639342  313419 notify.go:221] Checking for updates...
	I1124 09:28:00.641439  313419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:28:00.642866  313419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:28:00.644038  313419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:28:00.645158  313419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:28:00.646242  313419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:28:00.647842  313419 config.go:182] Loaded profile config "bridge-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:28:00.647971  313419 config.go:182] Loaded profile config "kubernetes-upgrade-967467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:28:00.648116  313419 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:28:00.648239  313419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:28:00.673061  313419 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:28:00.673192  313419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:28:00.733689  313419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:28:00.722661768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:28:00.733811  313419 docker.go:319] overlay module found
	I1124 09:28:00.735581  313419 out.go:179] * Using the docker driver based on user configuration
	I1124 09:28:00.736754  313419 start.go:309] selected driver: docker
	I1124 09:28:00.736772  313419 start.go:927] validating driver "docker" against <nil>
	I1124 09:28:00.736786  313419 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:28:00.737579  313419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:28:00.796312  313419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:28:00.786774164 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:28:00.796522  313419 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:28:00.796722  313419 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:28:00.798411  313419 out.go:179] * Using Docker driver with root privileges
	I1124 09:28:00.799822  313419 cni.go:84] Creating CNI manager for ""
	I1124 09:28:00.799877  313419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:00.799887  313419 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:28:00.799958  313419 start.go:353] cluster config:
	{Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:28:00.801235  313419 out.go:179] * Starting "no-preload-938348" primary control-plane node in "no-preload-938348" cluster
	I1124 09:28:00.802395  313419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:28:00.803842  313419 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:28:00.805046  313419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:28:00.805122  313419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:28:00.805156  313419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/config.json ...
	I1124 09:28:00.805188  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/config.json: {Name:mk8b9f744deb1190a836a54fd15dd6417acaf215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:00.805424  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:00.829211  313419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:28:00.829231  313419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:28:00.829250  313419 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:28:00.829288  313419 start.go:360] acquireMachinesLock for no-preload-938348: {Name:mk24e6e043077dc636d4f340fd547870514ace42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:00.829405  313419 start.go:364] duration metric: took 99.034µs to acquireMachinesLock for "no-preload-938348"
	I1124 09:28:00.829434  313419 start.go:93] Provisioning new machine with config: &{Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:28:00.829508  313419 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:27:57.610633  255979 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:27:57.610715  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:27:57.610777  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:27:57.643405  255979 cri.go:89] found id: "34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:27:57.643430  255979 cri.go:89] found id: "eb5ed54129935f75c244d8a9f814eaef5839d057f8f38fb306f71e04ea1ed18a"
	I1124 09:27:57.643436  255979 cri.go:89] found id: "45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:27:57.643441  255979 cri.go:89] found id: ""
	I1124 09:27:57.643449  255979 logs.go:282] 3 containers: [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd eb5ed54129935f75c244d8a9f814eaef5839d057f8f38fb306f71e04ea1ed18a 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2]
	I1124 09:27:57.643503  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.649552  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.653670  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.657319  255979 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:27:57.657402  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:27:57.686505  255979 cri.go:89] found id: "bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:27:57.686525  255979 cri.go:89] found id: ""
	I1124 09:27:57.686532  255979 logs.go:282] 1 containers: [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c]
	I1124 09:27:57.686588  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.690341  255979 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:27:57.690395  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:27:57.716532  255979 cri.go:89] found id: ""
	I1124 09:27:57.716558  255979 logs.go:282] 0 containers: []
	W1124 09:27:57.716566  255979 logs.go:284] No container was found matching "coredns"
	I1124 09:27:57.716572  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:27:57.716624  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:27:57.744395  255979 cri.go:89] found id: "bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56"
	I1124 09:27:57.744417  255979 cri.go:89] found id: "8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:27:57.744423  255979 cri.go:89] found id: "4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:27:57.744428  255979 cri.go:89] found id: ""
	I1124 09:27:57.744437  255979 logs.go:282] 3 containers: [bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484]
	I1124 09:27:57.744496  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.748619  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.752551  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.756201  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:27:57.756265  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:27:57.785648  255979 cri.go:89] found id: ""
	I1124 09:27:57.785675  255979 logs.go:282] 0 containers: []
	W1124 09:27:57.785685  255979 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:27:57.785693  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:27:57.785750  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:27:57.812558  255979 cri.go:89] found id: "df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:27:57.812576  255979 cri.go:89] found id: "15217537588ef1b2279049d3dbdc543ad1acfceed7e3c73394581f95e44c48b0"
	I1124 09:27:57.812581  255979 cri.go:89] found id: "233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:27:57.812584  255979 cri.go:89] found id: ""
	I1124 09:27:57.812590  255979 logs.go:282] 3 containers: [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8 15217537588ef1b2279049d3dbdc543ad1acfceed7e3c73394581f95e44c48b0 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564]
	I1124 09:27:57.812648  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.816612  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.820176  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:27:57.823539  255979 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:27:57.823583  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:27:57.851107  255979 cri.go:89] found id: ""
	I1124 09:27:57.851132  255979 logs.go:282] 0 containers: []
	W1124 09:27:57.851142  255979 logs.go:284] No container was found matching "kindnet"
	I1124 09:27:57.851149  255979 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:27:57.851204  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:27:57.878460  255979 cri.go:89] found id: ""
	I1124 09:27:57.878481  255979 logs.go:282] 0 containers: []
	W1124 09:27:57.878489  255979 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:27:57.878503  255979 logs.go:123] Gathering logs for dmesg ...
	I1124 09:27:57.878514  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:27:57.893380  255979 logs.go:123] Gathering logs for kube-apiserver [eb5ed54129935f75c244d8a9f814eaef5839d057f8f38fb306f71e04ea1ed18a] ...
	I1124 09:27:57.893404  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb5ed54129935f75c244d8a9f814eaef5839d057f8f38fb306f71e04ea1ed18a"
	I1124 09:27:57.922754  255979 logs.go:123] Gathering logs for etcd [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c] ...
	I1124 09:27:57.922783  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:27:57.954672  255979 logs.go:123] Gathering logs for kube-controller-manager [15217537588ef1b2279049d3dbdc543ad1acfceed7e3c73394581f95e44c48b0] ...
	I1124 09:27:57.954700  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15217537588ef1b2279049d3dbdc543ad1acfceed7e3c73394581f95e44c48b0"
	I1124 09:27:57.980329  255979 logs.go:123] Gathering logs for kube-controller-manager [233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564] ...
	I1124 09:27:57.980380  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:27:58.007725  255979 logs.go:123] Gathering logs for kube-apiserver [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd] ...
	I1124 09:27:58.007758  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:27:58.037974  255979 logs.go:123] Gathering logs for kube-apiserver [45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2] ...
	I1124 09:27:58.038003  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:27:58.073355  255979 logs.go:123] Gathering logs for kube-controller-manager [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8] ...
	I1124 09:27:58.073395  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:27:58.099949  255979 logs.go:123] Gathering logs for container status ...
	I1124 09:27:58.099975  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:27:58.130407  255979 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:27:58.130433  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:27:58.196544  255979 logs.go:123] Gathering logs for kubelet ...
	I1124 09:27:58.196574  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:27:58.279261  255979 logs.go:123] Gathering logs for kube-scheduler [bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56] ...
	I1124 09:27:58.279292  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56"
	W1124 09:27:58.306901  255979 logs.go:138] Found kube-scheduler [bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56] problem: E1124 09:26:30.566115       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:27:58.306945  255979 logs.go:123] Gathering logs for kube-scheduler [4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484] ...
	I1124 09:27:58.306961  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:27:58.334421  255979 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:27:58.334454  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
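Note: the one actionable failure surfaced by the log sweep above is the kube-scheduler bind error at 09:26:30 ("failed to listen on 127.0.0.1:10259 ... address already in use"). A minimal manual check, assuming shell access to the node (the profile behind PID 255979 is not named in this excerpt):

	# hypothetical check: which process holds the scheduler's secure port
	sudo ss -ltnp 'sport = :10259'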
	W1124 09:27:59.742099  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	W1124 09:28:02.247370  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
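The pod_ready warnings above come from a readiness polling loop; an equivalent one-off probe with kubectl would be (pod name from the log; the kube-system namespace is an assumption, since that is where coredns normally runs):

	kubectl -n kube-system get pod coredns-66bc5c9577-zzkl8 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'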
	I1124 09:27:59.394147  310299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-767267:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.203813599s)
	I1124 09:27:59.394181  310299 kic.go:203] duration metric: took 5.203965595s to extract preloaded images to volume ...
	W1124 09:27:59.394270  310299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:27:59.394309  310299 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:27:59.394379  310299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:27:59.459410  310299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-767267 --name old-k8s-version-767267 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-767267 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-767267 --network old-k8s-version-767267 --ip 192.168.76.2 --volume old-k8s-version-767267:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:27:59.803406  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Running}}
	I1124 09:27:59.825884  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:27:59.846385  310299 cli_runner.go:164] Run: docker exec old-k8s-version-767267 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:27:59.948146  310299 oci.go:144] the created container "old-k8s-version-767267" has a running status.
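The docker run above publishes 8443 (API server), 22 (SSH), 2376, 5000 and 32443 on ephemeral localhost ports; the actual host-side mappings can be listed with (container name from the log):

	docker port old-k8s-version-767267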
	I1124 09:27:59.948173  310299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa...
	I1124 09:27:59.994034  310299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:28:00.183133  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:28:00.208587  310299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:28:00.208611  310299 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-767267 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:28:00.258630  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:28:00.280088  310299 machine.go:94] provisionDockerMachine start ...
	I1124 09:28:00.280208  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:00.302572  310299 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:00.302836  310299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 09:28:00.302849  310299 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:28:00.452248  310299 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-767267
	
	I1124 09:28:00.452276  310299 ubuntu.go:182] provisioning hostname "old-k8s-version-767267"
	I1124 09:28:00.452355  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:00.473810  310299 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:00.474139  310299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 09:28:00.474165  310299 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-767267 && echo "old-k8s-version-767267" | sudo tee /etc/hostname
	I1124 09:28:00.636406  310299 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-767267
	
	I1124 09:28:00.636474  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:00.656810  310299 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:00.657039  310299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 09:28:00.657065  310299 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-767267' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-767267/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-767267' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:28:00.809263  310299 main.go:143] libmachine: SSH cmd err, output: <nil>: 
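The heredoc above keeps a 127.0.1.1 alias for the node's hostname in /etc/hosts so that sudo and the kubelet can resolve it locally; a quick confirmation, run inside the node, would be:

	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 old-k8s-version-767267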
	I1124 09:28:00.809292  310299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:28:00.809324  310299 ubuntu.go:190] setting up certificates
	I1124 09:28:00.809380  310299 provision.go:84] configureAuth start
	I1124 09:28:00.809436  310299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-767267
	I1124 09:28:00.828668  310299 provision.go:143] copyHostCerts
	I1124 09:28:00.828729  310299 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:28:00.828742  310299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:28:00.828829  310299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:28:00.828946  310299 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:28:00.828959  310299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:28:00.829000  310299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:28:00.829139  310299 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:28:00.829152  310299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:28:00.829190  310299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:28:00.829258  310299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-767267 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-767267]
	I1124 09:28:00.901558  310299 provision.go:177] copyRemoteCerts
	I1124 09:28:00.901607  310299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:28:00.901636  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:00.921626  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:01.028222  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:28:01.047729  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:28:01.066154  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:28:01.083596  310299 provision.go:87] duration metric: took 274.198763ms to configureAuth
	I1124 09:28:01.083622  310299 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:28:01.083758  310299 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:28:01.083848  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:01.104248  310299 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:01.104524  310299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 09:28:01.104555  310299 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:28:01.400883  310299 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:28:01.400908  310299 machine.go:97] duration metric: took 1.120788524s to provisionDockerMachine
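The restart above makes CRI-O pick up /etc/sysconfig/crio.minikube, which carries the service-CIDR insecure-registry flag; to inspect it on the node (path and contents from the log):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '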
	I1124 09:28:01.400920  310299 client.go:176] duration metric: took 7.771607444s to LocalClient.Create
	I1124 09:28:01.400953  310299 start.go:167] duration metric: took 7.771676285s to libmachine.API.Create "old-k8s-version-767267"
	I1124 09:28:01.400966  310299 start.go:293] postStartSetup for "old-k8s-version-767267" (driver="docker")
	I1124 09:28:01.400980  310299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:28:01.401039  310299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:28:01.401084  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:01.424158  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:01.531559  310299 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:28:01.535164  310299 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:28:01.535200  310299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:28:01.535210  310299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:28:01.535261  310299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:28:01.535400  310299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:28:01.535516  310299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:28:01.543432  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:28:01.565140  310299 start.go:296] duration metric: took 164.159017ms for postStartSetup
	I1124 09:28:01.565550  310299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-767267
	I1124 09:28:01.585259  310299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/config.json ...
	I1124 09:28:01.585552  310299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:28:01.585596  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:01.609718  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:01.716739  310299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:28:01.721416  310299 start.go:128] duration metric: took 8.094398376s to createHost
	I1124 09:28:01.721442  310299 start.go:83] releasing machines lock for "old-k8s-version-767267", held for 8.094540711s
	I1124 09:28:01.721509  310299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-767267
	I1124 09:28:01.741896  310299 ssh_runner.go:195] Run: cat /version.json
	I1124 09:28:01.741922  310299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:28:01.741938  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:01.741986  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:01.762654  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:01.762903  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:01.932967  310299 ssh_runner.go:195] Run: systemctl --version
	I1124 09:28:01.939803  310299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:28:01.982480  310299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:28:01.988701  310299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:28:01.988778  310299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:28:02.023319  310299 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:28:02.023372  310299 start.go:496] detecting cgroup driver to use...
	I1124 09:28:02.023412  310299 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:28:02.023460  310299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:28:02.042376  310299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:28:02.060672  310299 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:28:02.060733  310299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:28:02.085908  310299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:28:02.107004  310299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:28:02.206878  310299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:28:02.303359  310299 docker.go:234] disabling docker service ...
	I1124 09:28:02.303422  310299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:28:02.321855  310299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:28:02.334696  310299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:28:02.420105  310299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:28:02.501128  310299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:28:02.513549  310299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:28:02.528372  310299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 09:28:02.528431  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.538697  310299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:28:02.538746  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.547571  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.556239  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.564985  310299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:28:02.573003  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.581831  310299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.595122  310299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:02.603846  310299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:28:02.611156  310299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:28:02.618474  310299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:02.700134  310299 ssh_runner.go:195] Run: sudo systemctl restart crio
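The sed sequence above rewrites the pause image, switches cgroup_manager to "systemd", pins conmon_cgroup to "pod", and opens unprivileged low ports; a spot-check of the drop-in after the restart (path from the log):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf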
	I1124 09:28:02.834765  310299 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:28:02.834835  310299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:28:02.838840  310299 start.go:564] Will wait 60s for crictl version
	I1124 09:28:02.838898  310299 ssh_runner.go:195] Run: which crictl
	I1124 09:28:02.842360  310299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:28:02.866921  310299 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:28:02.867007  310299 ssh_runner.go:195] Run: crio --version
	I1124 09:28:02.894784  310299 ssh_runner.go:195] Run: crio --version
	I1124 09:28:02.927467  310299 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 09:28:02.928633  310299 cli_runner.go:164] Run: docker network inspect old-k8s-version-767267 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:28:02.946218  310299 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:28:02.950353  310299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:28:02.960461  310299 kubeadm.go:884] updating cluster {Name:old-k8s-version-767267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-767267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:28:02.960568  310299 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 09:28:02.960612  310299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:28:02.992650  310299 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:28:02.992669  310299 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:28:02.992709  310299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:28:03.017907  310299 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:28:03.017929  310299 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:28:03.017938  310299 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1124 09:28:03.018039  310299 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-767267 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-767267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
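The empty ExecStart= line in the unit above is the standard systemd idiom for clearing the packaged command before substituting a new one in a drop-in; the merged result can be viewed on the node with:

	systemctl cat kubelet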
	I1124 09:28:03.018124  310299 ssh_runner.go:195] Run: crio config
	I1124 09:28:03.062126  310299 cni.go:84] Creating CNI manager for ""
	I1124 09:28:03.062154  310299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:03.062176  310299 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:28:03.062210  310299 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-767267 NodeName:old-k8s-version-767267 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:28:03.062444  310299 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-767267"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:28:03.062519  310299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:28:03.071013  310299 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:28:03.071068  310299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:28:03.078853  310299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 09:28:03.091567  310299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:28:03.106807  310299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
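The staged file stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents; if the kubeadm build supports the validate subcommand (present in recent releases), the whole stack can be checked in one pass before init (binary path from the log):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new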
	I1124 09:28:03.119634  310299 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:28:03.123374  310299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:28:03.133160  310299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:03.214694  310299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:28:03.238921  310299 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267 for IP: 192.168.76.2
	I1124 09:28:03.238943  310299 certs.go:195] generating shared ca certs ...
	I1124 09:28:03.238962  310299 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.239114  310299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:28:03.239166  310299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:28:03.239180  310299 certs.go:257] generating profile certs ...
	I1124 09:28:03.239244  310299 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.key
	I1124 09:28:03.239257  310299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.crt with IP's: []
	I1124 09:28:03.325506  310299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.crt ...
	I1124 09:28:03.325531  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.crt: {Name:mk2aa31f0133176e4908fc4c47e4ae8ca465a30e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.325717  310299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.key ...
	I1124 09:28:03.325740  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/client.key: {Name:mk28739c568a7c808844c1873a9054f23477066b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.325853  310299 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key.b4c9fae7
	I1124 09:28:03.325870  310299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt.b4c9fae7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 09:28:03.363703  310299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt.b4c9fae7 ...
	I1124 09:28:03.363727  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt.b4c9fae7: {Name:mk46315e2639a3d260eb7edabb67f90d8b914e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.363898  310299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key.b4c9fae7 ...
	I1124 09:28:03.363915  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key.b4c9fae7: {Name:mk7caa57088f528e212c27e90108e01f51e4c650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.364014  310299 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt.b4c9fae7 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt
	I1124 09:28:03.364092  310299 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key.b4c9fae7 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key
	I1124 09:28:03.364155  310299 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.key
	I1124 09:28:03.364170  310299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.crt with IP's: []
	I1124 09:28:03.424367  310299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.crt ...
	I1124 09:28:03.424392  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.crt: {Name:mk1d845747948db0efa05f89d1626c41eb525fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.424572  310299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.key ...
	I1124 09:28:03.424612  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.key: {Name:mkd225107e5d61d21b07561f07d03a2be746e89a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:03.424824  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:28:03.424867  310299 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:28:03.424877  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:28:03.424901  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:28:03.424927  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:28:03.424951  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:28:03.424992  310299 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:28:03.425547  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:28:03.444327  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:28:03.461800  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:28:03.478774  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:28:03.495707  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:28:03.512937  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:28:03.530189  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:28:03.547264  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/old-k8s-version-767267/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:28:03.564830  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:28:03.583711  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:28:03.601275  310299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:28:03.618824  310299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:28:03.631122  310299 ssh_runner.go:195] Run: openssl version
	I1124 09:28:03.637015  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:28:03.645350  310299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:03.649383  310299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:03.649434  310299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:03.684199  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:28:03.693014  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:28:03.701379  310299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:28:03.705256  310299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:28:03.705299  310299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:28:03.740705  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:28:03.749735  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:28:03.758250  310299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:28:03.762247  310299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:28:03.762297  310299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:28:03.797098  310299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
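The .0 symlink names created above are OpenSSL subject hashes, which is why each ln is preceded by an openssl x509 -hash call; reproducing the first one by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  ->  /etc/ssl/certs/b5213941.0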
	I1124 09:28:03.806136  310299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:28:03.809781  310299 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:28:03.809831  310299 kubeadm.go:401] StartCluster: {Name:old-k8s-version-767267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-767267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:28:03.809895  310299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:28:03.809954  310299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:28:03.836390  310299 cri.go:89] found id: ""
	I1124 09:28:03.836454  310299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:28:03.844658  310299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:28:03.852775  310299 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:28:03.852820  310299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:28:03.860830  310299 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:28:03.860848  310299 kubeadm.go:158] found existing configuration files:
	
	I1124 09:28:03.860886  310299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:28:03.868836  310299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:28:03.868883  310299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:28:03.876355  310299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:28:03.883943  310299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:28:03.884000  310299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:28:03.891242  310299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:28:03.898826  310299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:28:03.898881  310299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:28:03.906111  310299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:28:03.913524  310299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:28:03.913567  310299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:28:03.920637  310299 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:28:03.972659  310299 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 09:28:03.972770  310299 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:28:04.012774  310299 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:28:04.012852  310299 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:28:04.012885  310299 kubeadm.go:319] OS: Linux
	I1124 09:28:04.012930  310299 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:28:04.012983  310299 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:28:04.013047  310299 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:28:04.013112  310299 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:28:04.013205  310299 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:28:04.013290  310299 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:28:04.013400  310299 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:28:04.013473  310299 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:28:04.085061  310299 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:28:04.085206  310299 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:28:04.085355  310299 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 09:28:04.237306  310299 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:28:00.831522  313419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:28:00.831741  313419 start.go:159] libmachine.API.Create for "no-preload-938348" (driver="docker")
	I1124 09:28:00.831775  313419 client.go:173] LocalClient.Create starting
	I1124 09:28:00.831874  313419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:28:00.831911  313419 main.go:143] libmachine: Decoding PEM data...
	I1124 09:28:00.831931  313419 main.go:143] libmachine: Parsing certificate...
	I1124 09:28:00.831989  313419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:28:00.832014  313419 main.go:143] libmachine: Decoding PEM data...
	I1124 09:28:00.832040  313419 main.go:143] libmachine: Parsing certificate...
	I1124 09:28:00.832485  313419 cli_runner.go:164] Run: docker network inspect no-preload-938348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:28:00.851012  313419 cli_runner.go:211] docker network inspect no-preload-938348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:28:00.851135  313419 network_create.go:284] running [docker network inspect no-preload-938348] to gather additional debugging logs...
	I1124 09:28:00.851166  313419 cli_runner.go:164] Run: docker network inspect no-preload-938348
	W1124 09:28:00.871875  313419 cli_runner.go:211] docker network inspect no-preload-938348 returned with exit code 1
	I1124 09:28:00.871913  313419 network_create.go:287] error running [docker network inspect no-preload-938348]: docker network inspect no-preload-938348: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-938348 not found
	I1124 09:28:00.871929  313419 network_create.go:289] output of [docker network inspect no-preload-938348]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-938348 not found
	
	** /stderr **
	I1124 09:28:00.872084  313419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:28:00.890980  313419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:28:00.891776  313419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:28:00.892597  313419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:28:00.893293  313419 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49a891848d14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:26:80:16:6d:29} reservation:<nil>}
	I1124 09:28:00.893886  313419 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-af4adc144678 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:0f:14:de:ce:99} reservation:<nil>}
	I1124 09:28:00.894734  313419 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fda550}
	I1124 09:28:00.894759  313419 network_create.go:124] attempt to create docker network no-preload-938348 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 09:28:00.894816  313419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-938348 no-preload-938348
	I1124 09:28:00.944937  313419 network_create.go:108] docker network no-preload-938348 192.168.94.0/24 created
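
The network.go lines above scan candidate private /24 blocks (192.168.49.0, .58.0, .67.0, .76.0, .85.0) and settle on the first one not already claimed by a bridge interface, landing on 192.168.94.0/24. A minimal Go sketch of that scan, with a hypothetical isTaken probe standing in for minikube's real interface inspection:

```go
package main

import "fmt"

// isTaken stands in for minikube's check of existing bridge networks;
// the occupied subnets are hard-coded from the log above.
func isTaken(subnet string) bool {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	return taken[subnet]
}

func main() {
	// Candidates step the third octet by 9, matching the
	// 49, 58, 67, 76, 85, 94 progression in the log.
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if isTaken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
```

The chosen subnet then feeds directly into the `docker network create --driver=bridge --subnet=... --gateway=...` call shown above.
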
	I1124 09:28:00.944966  313419 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-938348" container
	I1124 09:28:00.945014  313419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:28:00.959034  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:00.964102  313419 cli_runner.go:164] Run: docker volume create no-preload-938348 --label name.minikube.sigs.k8s.io=no-preload-938348 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:28:00.983859  313419 oci.go:103] Successfully created a docker volume no-preload-938348
	I1124 09:28:00.983998  313419 cli_runner.go:164] Run: docker run --rm --name no-preload-938348-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-938348 --entrypoint /usr/bin/test -v no-preload-938348:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:28:01.141480  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:01.301604  313419 cache.go:107] acquiring lock: {Name:mk50e8a993397cfd35eb04bbf3ec3f2f16922e03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301593  313419 cache.go:107] acquiring lock: {Name:mkbf0dee95f0ab47974350aecf97d10e64a67897 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301593  313419 cache.go:107] acquiring lock: {Name:mk22cdf247cbd1eba82607ef17480dc2601681cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301666  313419 cache.go:107] acquiring lock: {Name:mk690ae61adbe621ac8f3906853ffca5c6beb812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301638  313419 cache.go:107] acquiring lock: {Name:mk44ea28b5ef083e518e10f8b09fe20e117fa612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301668  313419 cache.go:107] acquiring lock: {Name:mk7db92c93cf19a2f7751497e327ce09d843bbd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301702  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:28:01.301709  313419 cache.go:107] acquiring lock: {Name:mk02678e83bd0bc783689569fa5806aa92d36dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301605  313419 cache.go:107] acquiring lock: {Name:mk4b39f728589920114b6f2c68f5093e514fadca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:01.301743  313419 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.065µs
	I1124 09:28:01.301757  313419 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:28:01.301763  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:28:01.301765  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:28:01.301748  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:28:01.301775  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:28:01.301773  313419 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 138.383µs
	I1124 09:28:01.301781  313419 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 216.454µs
	I1124 09:28:01.301777  313419 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 113.15µs
	I1124 09:28:01.301795  313419 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:28:01.301738  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:28:01.301808  313419 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.035µs
	I1124 09:28:01.301816  313419 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:28:01.301786  313419 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:28:01.301738  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:28:01.301835  313419 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 174.456µs
	I1124 09:28:01.301842  313419 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:28:01.301784  313419 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 77.367µs
	I1124 09:28:01.301850  313419 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:28:01.301789  313419 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:28:01.301860  313419 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 261.344µs
	I1124 09:28:01.301873  313419 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:28:01.301799  313419 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:28:01.301898  313419 cache.go:87] Successfully saved all images to host disk.
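
Each cache.go:107 line above acquires a named lock ({... Delay:500ms Timeout:10m0s ...}) before its existence check, so the eight image-cache goroutines can race safely; every tarball is already on disk here, which is why all eight report "exists ... succeeded" within microseconds. A rough sketch of that acquire-check-save pattern, using an in-process mutex map instead of minikube's file-based locks:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{} // one lock per cache path
)

// lockFor returns the mutex guarding a cache path, creating it on
// first use; minikube uses named file locks with Delay/Timeout.
func lockFor(path string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if locks[path] == nil {
		locks[path] = &sync.Mutex{}
	}
	return locks[path]
}

// cacheImage mirrors the log's flow: under the lock, a pre-existing
// tarball short-circuits the save ("exists ... succeeded").
func cacheImage(image, path string) error {
	l := lockFor(path)
	l.Lock()
	defer l.Unlock()
	if _, err := os.Stat(path); err == nil {
		fmt.Println(path, "exists")
		return nil
	}
	// Placeholder for the real pull-and-save-to-tar step.
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(image), 0o644)
}

func main() {
	var wg sync.WaitGroup
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.5.24-0"} {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			_ = cacheImage(img, filepath.Join(os.TempDir(), "cache", filepath.Base(img)))
		}(img)
	}
	wg.Wait()
}
```
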
	I1124 09:28:01.387446  313419 oci.go:107] Successfully prepared a docker volume no-preload-938348
	I1124 09:28:01.387526  313419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1124 09:28:01.387604  313419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:28:01.387630  313419 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:28:01.387662  313419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:28:01.447247  313419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-938348 --name no-preload-938348 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-938348 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-938348 --network no-preload-938348 --ip 192.168.94.2 --volume no-preload-938348:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:28:01.765285  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Running}}
	I1124 09:28:01.785421  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:01.805745  313419 cli_runner.go:164] Run: docker exec no-preload-938348 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:28:01.860399  313419 oci.go:144] the created container "no-preload-938348" has a running status.
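
After the long `docker run`, the tool confirms the container actually came up by templating its state out of `docker container inspect` before declaring a running status. A compact sketch of that readiness check via os/exec (assumes a local docker CLI on PATH; the container name is taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// running shells out the same way cli_runner.go does and parses the
// template output, which is "true" or "false" plus a newline.
func running(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Running}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	ok, err := running("no-preload-938348")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if ok {
		// Matches the log's oci.go:144 message.
		fmt.Println(`the created container "no-preload-938348" has a running status.`)
	}
}
```
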
	I1124 09:28:01.860431  313419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa...
	I1124 09:28:01.935205  313419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:28:01.962284  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:01.983392  313419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:28:01.983417  313419 kic_runner.go:114] Args: [docker exec --privileged no-preload-938348 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:28:02.032023  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:02.057619  313419 machine.go:94] provisionDockerMachine start ...
	I1124 09:28:02.057711  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:02.082302  313419 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:02.082658  313419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 09:28:02.082670  313419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:28:02.083381  313419 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 09:28:05.227567  313419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-938348
	
	I1124 09:28:05.227595  313419 ubuntu.go:182] provisioning hostname "no-preload-938348"
	I1124 09:28:05.227672  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:05.246976  313419 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:05.247225  313419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 09:28:05.247245  313419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-938348 && echo "no-preload-938348" | sudo tee /etc/hostname
	I1124 09:28:05.401954  313419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-938348
	
	I1124 09:28:05.402026  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:05.421625  313419 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:05.421839  313419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 09:28:05.421855  313419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-938348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-938348/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-938348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:28:05.567918  313419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:28:05.567960  313419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:28:05.568030  313419 ubuntu.go:190] setting up certificates
	I1124 09:28:05.568052  313419 provision.go:84] configureAuth start
	I1124 09:28:05.568131  313419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-938348
	I1124 09:28:05.587113  313419 provision.go:143] copyHostCerts
	I1124 09:28:05.587174  313419 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:28:05.587186  313419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:28:05.587250  313419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:28:05.587383  313419 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:28:05.587394  313419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:28:05.587424  313419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:28:05.587488  313419 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:28:05.587497  313419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:28:05.587520  313419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:28:05.587572  313419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.no-preload-938348 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-938348]
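
provision.go:117 issues a server certificate whose SANs cover every name the endpoint may be reached by: 127.0.0.1, 192.168.94.2, localhost, minikube, and no-preload-938348. A self-signed sketch of a certificate carrying those SANs (minikube signs with its ca-key.pem rather than self-signing, and its key type may differ; this is only illustrative):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-938348"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SANs copied from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-938348"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```
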
	I1124 09:28:05.630390  313419 provision.go:177] copyRemoteCerts
	I1124 09:28:05.630446  313419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:28:05.630480  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:04.240220  310299 out.go:252]   - Generating certificates and keys ...
	I1124 09:28:04.240314  310299 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:28:04.240439  310299 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:28:04.340052  310299 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:28:04.553669  310299 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:28:04.708481  310299 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:28:04.810103  310299 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:28:04.895138  310299 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:28:04.895300  310299 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-767267] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:28:04.970118  310299 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:28:04.970304  310299 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-767267] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:28:05.113562  310299 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:28:05.246348  310299 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:28:05.379241  310299 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:28:05.379428  310299 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:28:05.561203  310299 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:28:05.880102  310299 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:28:05.984691  310299 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:28:06.101514  310299 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:28:06.102085  310299 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:28:06.105882  310299 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 09:28:04.741429  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	W1124 09:28:07.242034  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	I1124 09:28:05.648833  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:05.751024  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:28:05.770392  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:28:05.788518  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:28:05.806692  313419 provision.go:87] duration metric: took 238.623257ms to configureAuth
	I1124 09:28:05.806716  313419 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:28:05.806856  313419 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:28:05.806948  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:05.825803  313419 main.go:143] libmachine: Using SSH client type: native
	I1124 09:28:05.826068  313419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 09:28:05.826092  313419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:28:06.118482  313419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:28:06.118507  313419 machine.go:97] duration metric: took 4.060864862s to provisionDockerMachine
	I1124 09:28:06.118519  313419 client.go:176] duration metric: took 5.286736738s to LocalClient.Create
	I1124 09:28:06.118540  313419 start.go:167] duration metric: took 5.286798847s to libmachine.API.Create "no-preload-938348"
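
The provisioning step just above pushed CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarted CRI-O, which is how the ServiceCIDR from the cluster config becomes an insecure-registry range for the runtime. A sketch of rendering that payload; the surrounding mkdir/tee/systemctl plumbing is shown verbatim in the log:

```go
package main

import "fmt"

// crioSysconfig renders the /etc/sysconfig/crio.minikube content the
// SSH command above pipes through tee before restarting CRI-O.
func crioSysconfig(serviceCIDR string) string {
	return fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	// ServiceCIDR comes from the cluster config dump (10.96.0.0/12).
	fmt.Print(crioSysconfig("10.96.0.0/12"))
}
```
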
	I1124 09:28:06.118549  313419 start.go:293] postStartSetup for "no-preload-938348" (driver="docker")
	I1124 09:28:06.118567  313419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:28:06.118633  313419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:28:06.118676  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:06.137774  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:06.249531  313419 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:28:06.253626  313419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:28:06.253661  313419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:28:06.253676  313419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:28:06.253735  313419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:28:06.253831  313419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:28:06.253947  313419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:28:06.262425  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:28:06.284098  313419 start.go:296] duration metric: took 165.530075ms for postStartSetup
	I1124 09:28:06.284463  313419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-938348
	I1124 09:28:06.302504  313419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/config.json ...
	I1124 09:28:06.302770  313419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:28:06.302819  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:06.320265  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:06.419713  313419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:28:06.424440  313419 start.go:128] duration metric: took 5.594919033s to createHost
	I1124 09:28:06.424459  313419 start.go:83] releasing machines lock for "no-preload-938348", held for 5.59503984s
	I1124 09:28:06.424526  313419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-938348
	I1124 09:28:06.443382  313419 ssh_runner.go:195] Run: cat /version.json
	I1124 09:28:06.443444  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:06.443388  313419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:28:06.443558  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:06.465142  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:06.465371  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:06.621504  313419 ssh_runner.go:195] Run: systemctl --version
	I1124 09:28:06.628485  313419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:28:06.661275  313419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:28:06.666042  313419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:28:06.666094  313419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:28:06.692367  313419 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:28:06.692392  313419 start.go:496] detecting cgroup driver to use...
	I1124 09:28:06.692426  313419 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:28:06.692463  313419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:28:06.708652  313419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:28:06.720688  313419 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:28:06.720738  313419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:28:06.738307  313419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:28:06.755803  313419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:28:06.840021  313419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:28:06.936740  313419 docker.go:234] disabling docker service ...
	I1124 09:28:06.936812  313419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:28:06.957547  313419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:28:06.971054  313419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:28:07.094188  313419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:28:07.188937  313419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:28:07.201723  313419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:28:07.218969  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
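
The /etc/crictl.yaml write a few lines up pins crictl to CRI-O's socket, so the later `crictl version`, `crictl images`, and `crictl rmi` calls in this log all hit the right runtime. A sketch of producing that file (written to a temp path here rather than /etc):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The same content the log pipes through tee: point crictl at
	// CRI-O's socket so subsequent crictl calls use that runtime.
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	// Temp path for the sketch; the real target is /etc/crictl.yaml.
	path := os.TempDir() + "/crictl.yaml"
	if err := os.WriteFile(path, []byte(crictlYAML), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}
```
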
	I1124 09:28:07.374467  313419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:28:07.374535  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.386136  313419 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:28:07.386198  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.395657  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.405527  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.414888  313419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:28:07.423534  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.432739  313419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.446601  313419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:28:07.455807  313419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:28:07.463386  313419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:28:07.471149  313419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:07.563804  313419 ssh_runner.go:195] Run: sudo systemctl restart crio
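
The crio.go:59/crio.go:70 block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to "systemd" (matching the driver detected on the host), conmon_cgroup = "pod" is re-inserted, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before the daemon-reload and crio restart. A sketch of those substitutions on a toy config fragment, using Go regexps in place of the sed calls:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A toy 02-crio.conf fragment standing in for the real file.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	// Same substitutions the sed pipeline above performs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// conmon_cgroup and the default_sysctls block are appended after
	// the cgroup_manager line, mirroring the `sed -i '/.../a ...'` calls.
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
	fmt.Print(conf)
}
```
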
	I1124 09:28:07.718303  313419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:28:07.718382  313419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:28:07.722596  313419 start.go:564] Will wait 60s for crictl version
	I1124 09:28:07.722656  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:07.726418  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:28:07.754109  313419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:28:07.754214  313419 ssh_runner.go:195] Run: crio --version
	I1124 09:28:07.785201  313419 ssh_runner.go:195] Run: crio --version
	I1124 09:28:07.823200  313419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:28:06.108316  310299 out.go:252]   - Booting up control plane ...
	I1124 09:28:06.108482  310299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:28:06.108603  310299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:28:06.108710  310299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:28:06.123958  310299 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:28:06.125961  310299 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:28:06.126051  310299 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:28:06.226397  310299 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 09:28:07.824348  313419 cli_runner.go:164] Run: docker network inspect no-preload-938348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:28:07.843287  313419 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 09:28:07.847994  313419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:28:07.858656  313419 kubeadm.go:884] updating cluster {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:28:07.858828  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:08.015260  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:08.178467  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
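
The repeated binary.go:80 lines show kubeadm being fetched straight from dl.k8s.io with a `checksum=file:...kubeadm.sha256` pin rather than served from the local cache. The pin boils down to comparing a SHA-256 digest of the downloaded bytes against the published one, as in this sketch:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify checks downloaded bytes against the expected digest, the
// same guarantee the checksum=file:...kubeadm.sha256 query expresses.
func verify(data []byte, wantHex string) error {
	sum := sha256.Sum256(data)
	if got := hex.EncodeToString(sum[:]); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	data := []byte("kubeadm-binary-bytes") // stand-in payload
	sum := sha256.Sum256(data)
	fmt.Println(verify(data, hex.EncodeToString(sum[:]))) // <nil>
}
```
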
	I1124 09:28:08.390711  313419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:28:08.390772  313419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:28:08.417469  313419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:28:08.417489  313419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:28:08.417546  313419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:08.417548  313419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.417581  313419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.417601  313419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.417617  313419 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.417566  313419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.417604  313419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:28:08.417594  313419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.418990  313419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.419000  313419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.419024  313419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.418990  313419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.418992  313419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:08.418991  313419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.419069  313419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:28:08.419134  313419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.582653  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.586913  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.590108  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.594519  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.606750  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.612193  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:28:08.642109  313419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:28:08.642155  313419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.642203  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.642304  313419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:28:08.642358  313419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.642397  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.642482  313419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:28:08.642509  313419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.642544  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.652582  313419 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:28:08.652619  313419 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.652660  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.659608  313419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:28:08.659644  313419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.659686  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.667230  313419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:28:08.667265  313419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:28:08.667299  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.667303  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.667374  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.667441  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.667486  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.667499  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.701703  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.702177  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.702223  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:28:08.702177  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.702294  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.702227  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.719684  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.752185  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:28:08.758074  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:28:08.758414  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:28:08.762678  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:28:08.764605  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:28:08.764699  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:28:08.802753  313419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:28:08.802802  313419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.802850  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:08.802949  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:28:08.803030  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:28:08.808468  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:28:08.808625  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:28:08.811685  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:28:08.811770  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:28:08.811802  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:28:08.811901  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:28:08.813838  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:28:08.813977  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:28:08.814007  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.814051  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:28:08.814083  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:28:08.814098  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:28:08.814132  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:28:08.814153  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:28:08.819633  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:28:08.819663  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
	I1124 09:28:08.821933  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:28:08.821965  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:28:08.872182  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:28:08.872312  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:28:08.872363  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:08.872429  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:28:08.872481  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:28:09.017839  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:28:09.017883  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:28:09.018000  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:28:09.103027  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:28:09.103118  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:28:09.107944  313419 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:28:09.108015  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 09:28:09.130962  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:28:09.130999  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:28:09.436711  313419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:09.521442  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 09:28:09.521480  313419 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:28:09.521514  313419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:28:09.521530  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:28:09.521555  313419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:09.521602  313419 ssh_runner.go:195] Run: which crictl
	I1124 09:28:10.729077  310299 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502799 seconds
	I1124 09:28:10.729252  310299 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:28:10.742196  310299 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:28:11.263095  310299 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:28:11.263437  310299 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-767267 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:28:11.776762  310299 kubeadm.go:319] [bootstrap-token] Using token: ei86sv.0oniwykbsmbos5tc
	I1124 09:28:08.756995  255979 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.422517994s)
	W1124 09:28:08.757040  255979 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58228->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58228->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1124 09:28:08.757050  255979 logs.go:123] Gathering logs for kube-scheduler [8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe] ...
	I1124 09:28:08.757064  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:08.864273  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:08.864370  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:08.864464  255979 out.go:285] X Problems detected in kube-scheduler [bd27780f2b4fc56d1a494a9bdf31c9543324beb465bda55c30c816706bbcfb56]:
	W1124 09:28:08.864496  255979 out.go:285]   E1124 09:26:30.566115       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:08.864772  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:08.864792  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:28:11.778625  310299 out.go:252]   - Configuring RBAC rules ...
	I1124 09:28:11.778783  310299 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:28:11.783289  310299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:28:11.793427  310299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:28:11.796295  310299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:28:11.799504  310299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:28:11.802483  310299 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:28:11.814708  310299 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:28:12.002731  310299 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:28:12.187316  310299 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:28:12.188149  310299 kubeadm.go:319] 
	I1124 09:28:12.188235  310299 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:28:12.188252  310299 kubeadm.go:319] 
	I1124 09:28:12.188404  310299 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:28:12.188414  310299 kubeadm.go:319] 
	I1124 09:28:12.188439  310299 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:28:12.188507  310299 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:28:12.188577  310299 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:28:12.188590  310299 kubeadm.go:319] 
	I1124 09:28:12.188661  310299 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:28:12.188672  310299 kubeadm.go:319] 
	I1124 09:28:12.188742  310299 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:28:12.188750  310299 kubeadm.go:319] 
	I1124 09:28:12.188821  310299 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:28:12.188931  310299 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:28:12.189043  310299 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:28:12.189055  310299 kubeadm.go:319] 
	I1124 09:28:12.189170  310299 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:28:12.189280  310299 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:28:12.189295  310299 kubeadm.go:319] 
	I1124 09:28:12.189446  310299 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ei86sv.0oniwykbsmbos5tc \
	I1124 09:28:12.189598  310299 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:28:12.189637  310299 kubeadm.go:319] 	--control-plane 
	I1124 09:28:12.189645  310299 kubeadm.go:319] 
	I1124 09:28:12.189766  310299 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:28:12.189778  310299 kubeadm.go:319] 
	I1124 09:28:12.189924  310299 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ei86sv.0oniwykbsmbos5tc \
	I1124 09:28:12.190071  310299 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
	I1124 09:28:12.192192  310299 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:28:12.192385  310299 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
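	Note on the join command above: the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe (a sketch assuming the default CA path /etc/kubernetes/pki/ca.crt and an RSA CA key):

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'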
	I1124 09:28:12.192421  310299 cni.go:84] Creating CNI manager for ""
	I1124 09:28:12.192434  310299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:12.194068  310299 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 09:28:09.242818  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	W1124 09:28:11.741151  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	I1124 09:28:12.195477  310299 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:28:12.199799  310299 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 09:28:12.199815  310299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:28:12.214467  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:28:12.896452  310299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:28:12.896517  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:12.896545  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-767267 minikube.k8s.io/updated_at=2025_11_24T09_28_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-767267 minikube.k8s.io/primary=true
	I1124 09:28:12.908804  310299 ops.go:34] apiserver oom_adj: -16
	I1124 09:28:13.006553  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:10.963619  313419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.442068394s)
	I1124 09:28:10.963648  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:28:10.963666  313419 ssh_runner.go:235] Completed: which crictl: (1.442041075s)
	I1124 09:28:10.963678  313419 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:28:10.963722  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:28:10.963722  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:12.374817  313419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0: (1.41106528s)
	I1124 09:28:12.374844  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:28:12.374885  313419 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:28:12.374891  313419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.411081801s)
	I1124 09:28:12.374949  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:28:12.374957  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:13.509106  313419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.134138058s)
	I1124 09:28:13.509141  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:28:13.509162  313419 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:28:13.509179  313419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.134199841s)
	I1124 09:28:13.509201  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:28:13.509256  313419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:15.217614  313419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.7083944s)
	I1124 09:28:15.217637  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:28:15.217671  313419 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:28:15.217685  313419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.708409217s)
	I1124 09:28:15.217732  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:28:15.217731  313419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:28:15.217895  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1124 09:28:14.241424  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	W1124 09:28:16.241930  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	I1124 09:28:13.506672  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:14.006719  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:14.506717  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:15.006847  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:15.507679  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:16.007549  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:16.507391  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:17.007635  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:17.506904  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:18.007519  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:16.356612  313419 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.138855238s)
	I1124 09:28:16.356637  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:28:16.356667  313419 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:28:16.356679  313419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.138762876s)
	I1124 09:28:16.356720  313419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:28:16.356737  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:28:16.356744  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:28:17.209815  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:28:17.209858  313419 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:28:17.209906  313419 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:28:17.758594  313419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:28:17.758646  313419 cache_images.go:125] Successfully loaded all cached images
	I1124 09:28:17.758654  313419 cache_images.go:94] duration metric: took 9.341152436s to LoadCachedImages
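	Condensed, the preload sequence above repeats one pattern per image, using paths taken from this log (a sketch of the steps shown, not the exact minikube code path):

	    # on the node: is the image archive already present?
	    stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	    # if not, minikube copies it over SSH, clears any stale tag, and
	    # loads the archive into CRI-O's image store via podman:
	    sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	    sudo podman load -i /var/lib/minikube/images/pause_3.10.1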
	I1124 09:28:17.758665  313419 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:28:17.758753  313419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-938348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
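	A note on the generated unit above: the empty ExecStart= line is the standard systemd idiom for clearing any ExecStart inherited from the base kubelet.service before the minikube-specific command line is set. The effective merged unit can be inspected on the node with:

	    systemctl cat kubelet
	    systemd-analyze verify /lib/systemd/system/kubelet.service   # optional syntax check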
	I1124 09:28:17.758814  313419 ssh_runner.go:195] Run: crio config
	I1124 09:28:17.804441  313419 cni.go:84] Creating CNI manager for ""
	I1124 09:28:17.804464  313419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:17.804484  313419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:28:17.804511  313419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-938348 NodeName:no-preload-938348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:28:17.804650  313419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-938348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
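	The rendered config is written to /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml (both steps appear below). If a config like this needs manual validation, kubeadm can exercise it without modifying the node (a sketch; run with the matching kubeadm binary):

	    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml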
	I1124 09:28:17.804726  313419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:28:17.814039  313419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:28:17.814087  313419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:28:17.822366  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:28:17.822396  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:28:17.822396  313419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:28:17.822425  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:28:17.822455  313419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:28:17.822460  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:28:17.835443  313419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:28:17.835476  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:28:17.835501  313419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:28:17.835524  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:28:17.835529  313419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:28:17.844446  313419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:28:17.844478  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
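	The three binaries are fetched from dl.k8s.io with the published .sha256 files as checksums (the ?checksum=file:... URLs above). Outside of minikube, the equivalent manual verification would be (a sketch using the same release URLs):

	    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl
	    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check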
	I1124 09:28:18.342621  313419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:28:18.351309  313419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:28:18.364486  313419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:28:18.478645  313419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1124 09:28:18.492939  313419 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:28:18.497329  313419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:28:18.539648  313419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:18.627269  313419 ssh_runner.go:195] Run: sudo systemctl start kubelet
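	The /etc/hosts rewrite above is an atomic replace: any existing control-plane.minikube.internal entry is filtered out, the current mapping is appended, and the result is written to a temp file before being copied over the original, so readers never see a half-written file:

	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	      echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$ \
	      && sudo cp /tmp/h.$$ /etc/hosts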
	I1124 09:28:18.648964  313419 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348 for IP: 192.168.94.2
	I1124 09:28:18.648988  313419 certs.go:195] generating shared ca certs ...
	I1124 09:28:18.649007  313419 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.649141  313419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:28:18.649190  313419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:28:18.649200  313419 certs.go:257] generating profile certs ...
	I1124 09:28:18.649248  313419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.key
	I1124 09:28:18.649260  313419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.crt with IP's: []
	I1124 09:28:18.762319  313419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.crt ...
	I1124 09:28:18.762357  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.crt: {Name:mkc3df83c3d3d26acd0294a10cc680087a9ca444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.762548  313419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.key ...
	I1124 09:28:18.762562  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.key: {Name:mk74e2cc54422f685b6dfcdbb07074a992fc44a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.762677  313419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983
	I1124 09:28:18.762694  313419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt.64ae9983 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 09:28:18.845549  313419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt.64ae9983 ...
	I1124 09:28:18.845574  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt.64ae9983: {Name:mkca6f22530361bfa1e33edb5e225a2ae2d6c7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.845753  313419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983 ...
	I1124 09:28:18.845772  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983: {Name:mk1e1a56d975c8160b4e92c4b7c8966a576ec20f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.845877  313419 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt.64ae9983 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt
	I1124 09:28:18.845962  313419 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key
	I1124 09:28:18.846033  313419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key
	I1124 09:28:18.846049  313419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt with IP's: []
	I1124 09:28:18.905125  313419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt ...
	I1124 09:28:18.905147  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt: {Name:mk1722d064e2b0ebd218b2e79ff6dbe3f4b6628c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.905307  313419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key ...
	I1124 09:28:18.905324  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key: {Name:mk05369326bf1a5192a1415933e53606456bee63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:18.905560  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:28:18.905606  313419 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:28:18.905617  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:28:18.905640  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:28:18.905663  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:28:18.905693  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:28:18.905735  313419 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:28:18.906414  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:28:18.927670  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:28:18.947714  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:28:18.967773  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:28:18.987844  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:28:19.007997  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:28:19.028614  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:28:19.049105  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:28:19.069871  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:28:19.094957  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:28:19.115444  313419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:28:19.135525  313419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:28:19.149139  313419 ssh_runner.go:195] Run: openssl version
	I1124 09:28:19.156148  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:28:19.165503  313419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:28:19.170559  313419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:28:19.170621  313419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:28:19.216116  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:28:19.226288  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:28:19.235851  313419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:28:19.240831  313419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:28:19.240895  313419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:28:19.277926  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:28:19.288883  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:28:19.298559  313419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:19.303160  313419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:19.303220  313419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:28:19.346973  313419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
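	The b5213941.0-style names above follow OpenSSL's hashed-directory convention: trust lookups in /etc/ssl/certs expect a symlink named <subject-hash>.0 pointing at each CA PEM, where the hash comes from the certificate itself:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0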
	I1124 09:28:19.356650  313419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:28:19.360899  313419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:28:19.360952  313419 kubeadm.go:401] StartCluster: {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:28:19.361032  313419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:28:19.361069  313419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:28:19.388459  313419 cri.go:89] found id: ""
	I1124 09:28:19.388524  313419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:28:19.397486  313419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:28:19.406410  313419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:28:19.406469  313419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:28:19.417069  313419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:28:19.417090  313419 kubeadm.go:158] found existing configuration files:
	
	I1124 09:28:19.417138  313419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:28:19.427185  313419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:28:19.427238  313419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:28:19.436358  313419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:28:19.446618  313419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:28:19.446677  313419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:28:19.454820  313419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:28:19.462886  313419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:28:19.462933  313419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:28:19.471015  313419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:28:19.479750  313419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:28:19.479807  313419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
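	The four grep/rm pairs above collapse to one pattern: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it on init (a condensed sketch of what the log shows):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done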
	I1124 09:28:19.487851  313419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:28:19.611057  313419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:28:19.672180  313419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:28:18.870056  255979 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:28:18.870524  255979 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1124 09:28:18.870593  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:28:18.870668  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:28:18.899123  255979 cri.go:89] found id: "34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:18.899148  255979 cri.go:89] found id: "45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:18.899154  255979 cri.go:89] found id: ""
	I1124 09:28:18.899164  255979 logs.go:282] 2 containers: [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2]
	I1124 09:28:18.899234  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:18.903493  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:18.907519  255979 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:28:18.907585  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:28:18.937069  255979 cri.go:89] found id: "bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:18.937092  255979 cri.go:89] found id: ""
	I1124 09:28:18.937101  255979 logs.go:282] 1 containers: [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c]
	I1124 09:28:18.937158  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:18.941044  255979 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:28:18.941115  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:28:18.969059  255979 cri.go:89] found id: ""
	I1124 09:28:18.969085  255979 logs.go:282] 0 containers: []
	W1124 09:28:18.969095  255979 logs.go:284] No container was found matching "coredns"
	I1124 09:28:18.969103  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:28:18.969153  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:28:18.997824  255979 cri.go:89] found id: "fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	I1124 09:28:18.997847  255979 cri.go:89] found id: "8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:18.997853  255979 cri.go:89] found id: "4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:18.997857  255979 cri.go:89] found id: ""
	I1124 09:28:18.997866  255979 logs.go:282] 3 containers: [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484]
	I1124 09:28:18.997930  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:19.001924  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:19.005745  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:19.009890  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:28:19.009941  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:28:19.038348  255979 cri.go:89] found id: ""
	I1124 09:28:19.038375  255979 logs.go:282] 0 containers: []
	W1124 09:28:19.038386  255979 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:28:19.038393  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:28:19.038449  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:28:19.067273  255979 cri.go:89] found id: "df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:19.067299  255979 cri.go:89] found id: "233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:19.067306  255979 cri.go:89] found id: ""
	I1124 09:28:19.067317  255979 logs.go:282] 2 containers: [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564]
	I1124 09:28:19.067388  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:19.072394  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:19.076861  255979 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:28:19.076927  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:28:19.106183  255979 cri.go:89] found id: ""
	I1124 09:28:19.106207  255979 logs.go:282] 0 containers: []
	W1124 09:28:19.106216  255979 logs.go:284] No container was found matching "kindnet"
	I1124 09:28:19.106223  255979 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:28:19.106291  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:28:19.134265  255979 cri.go:89] found id: ""
	I1124 09:28:19.134292  255979 logs.go:282] 0 containers: []
	W1124 09:28:19.134302  255979 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:28:19.134311  255979 logs.go:123] Gathering logs for dmesg ...
	I1124 09:28:19.134323  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:28:19.149657  255979 logs.go:123] Gathering logs for etcd [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c] ...
	I1124 09:28:19.149679  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:19.183966  255979 logs.go:123] Gathering logs for kube-scheduler [8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe] ...
	I1124 09:28:19.184004  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:19.261704  255979 logs.go:123] Gathering logs for kube-controller-manager [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8] ...
	I1124 09:28:19.261731  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:19.289569  255979 logs.go:123] Gathering logs for kube-controller-manager [233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564] ...
	I1124 09:28:19.289598  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:19.317242  255979 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:28:19.317268  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:28:19.387844  255979 logs.go:123] Gathering logs for container status ...
	I1124 09:28:19.387882  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:28:19.421804  255979 logs.go:123] Gathering logs for kubelet ...
	I1124 09:28:19.421849  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:28:19.514080  255979 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:28:19.514106  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:28:19.585387  255979 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
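	The refusal on localhost:8443 means the apiserver was down while minikube gathered diagnostics; the same endpoint fails again below as a dial error against 192.168.103.2:8443. An illustrative hand-check of the same condition (not part of the harness) would be:

		curl -sk https://localhost:8443/healthz
		sudo crictl ps -a --name kube-apiserver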
	I1124 09:28:19.585413  255979 logs.go:123] Gathering logs for kube-apiserver [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd] ...
	I1124 09:28:19.585430  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:19.619598  255979 logs.go:123] Gathering logs for kube-apiserver [45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2] ...
	I1124 09:28:19.619631  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:19.659140  255979 logs.go:123] Gathering logs for kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] ...
	I1124 09:28:19.659171  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	W1124 09:28:19.688518  255979 logs.go:138] Found kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] problem: E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:19.688548  255979 logs.go:123] Gathering logs for kube-scheduler [4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484] ...
	I1124 09:28:19.688565  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:19.718091  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:19.718114  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:19.718171  255979 out.go:285] X Problems detected in kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433]:
	W1124 09:28:19.718187  255979 out.go:285]   E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:19.718194  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:19.718199  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
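	The scheduler problem flagged above is a port conflict: a second kube-scheduler instance could not bind 127.0.0.1:10259 because an earlier one still held the port. Illustrative commands to identify the holder on the node (not part of the harness):

		sudo ss -ltnp | grep 10259
		sudo crictl ps -a --name kube-scheduler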
	W1124 09:28:18.242240  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	W1124 09:28:20.741440  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	I1124 09:28:18.507284  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:19.006684  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:19.506617  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:20.007676  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:20.507144  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:21.007264  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:21.507440  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:22.007323  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:22.507437  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:23.006760  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:23.507561  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:24.007277  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:24.507352  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:25.007528  310299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:25.096552  310299 kubeadm.go:1114] duration metric: took 12.200086868s to wait for elevateKubeSystemPrivileges
	I1124 09:28:25.096595  310299 kubeadm.go:403] duration metric: took 21.286765255s to StartCluster
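	The repeated "kubectl get sa default" runs above are minikube's probe for elevateKubeSystemPrivileges: it polls until the default service account exists, which only happens once the controller-manager's service-account controllers are up. An illustrative standalone equivalent:

		kubectl -n default get serviceaccount default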
	I1124 09:28:25.096617  310299 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:25.096698  310299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:28:25.098007  310299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:25.098206  310299 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:28:25.098217  310299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:28:25.098244  310299 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:28:25.098408  310299 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-767267"
	I1124 09:28:25.098421  310299 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-767267"
	I1124 09:28:25.098430  310299 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-767267"
	I1124 09:28:25.098438  310299 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:28:25.098440  310299 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-767267"
	I1124 09:28:25.098464  310299 host.go:66] Checking if "old-k8s-version-767267" exists ...
	I1124 09:28:25.098783  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:28:25.098932  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:28:25.099771  310299 out.go:179] * Verifying Kubernetes components...
	I1124 09:28:25.101658  310299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:25.127642  310299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 09:28:22.743121  298357 pod_ready.go:104] pod "coredns-66bc5c9577-zzkl8" is not "Ready", error: <nil>
	I1124 09:28:24.299447  298357 pod_ready.go:94] pod "coredns-66bc5c9577-zzkl8" is "Ready"
	I1124 09:28:24.299479  298357 pod_ready.go:86] duration metric: took 38.064001652s for pod "coredns-66bc5c9577-zzkl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.475554  298357 pod_ready.go:83] waiting for pod "etcd-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.481242  298357 pod_ready.go:94] pod "etcd-bridge-949664" is "Ready"
	I1124 09:28:24.481268  298357 pod_ready.go:86] duration metric: took 5.675069ms for pod "etcd-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.484490  298357 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.515075  298357 pod_ready.go:94] pod "kube-apiserver-bridge-949664" is "Ready"
	I1124 09:28:24.515105  298357 pod_ready.go:86] duration metric: took 30.572615ms for pod "kube-apiserver-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.585506  298357 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.591299  298357 pod_ready.go:94] pod "kube-controller-manager-bridge-949664" is "Ready"
	I1124 09:28:24.591328  298357 pod_ready.go:86] duration metric: took 5.792589ms for pod "kube-controller-manager-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:24.640618  298357 pod_ready.go:83] waiting for pod "kube-proxy-qxlsq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:25.040239  298357 pod_ready.go:94] pod "kube-proxy-qxlsq" is "Ready"
	I1124 09:28:25.040271  298357 pod_ready.go:86] duration metric: took 399.629308ms for pod "kube-proxy-qxlsq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:25.241208  298357 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:25.640061  298357 pod_ready.go:94] pod "kube-scheduler-bridge-949664" is "Ready"
	I1124 09:28:25.640098  298357 pod_ready.go:86] duration metric: took 398.859742ms for pod "kube-scheduler-bridge-949664" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:25.640114  298357 pod_ready.go:40] duration metric: took 39.408918644s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:28:25.713764  298357 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:28:25.715602  298357 out.go:179] * Done! kubectl is now configured to use "bridge-949664" cluster and "default" namespace by default
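	The pod_ready polling above (one Ready check per control-plane label) can be approximated from a shell with kubectl's built-in wait, e.g. for the CoreDNS pod that took 38s to become Ready (illustrative):

		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s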
	I1124 09:28:25.128997  310299 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:28:25.129016  310299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:28:25.129073  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:25.134011  310299 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-767267"
	I1124 09:28:25.134109  310299 host.go:66] Checking if "old-k8s-version-767267" exists ...
	I1124 09:28:25.134637  310299 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:28:25.157875  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:25.165662  310299 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:28:25.165730  310299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:28:25.165794  310299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:28:25.192063  310299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:28:25.217767  310299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:28:25.278633  310299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:28:25.323760  310299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:28:25.357067  310299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:28:25.604309  310299 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
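	The sed pipeline at 09:28:25.217767 splices a hosts plugin block into the CoreDNS Corefile ahead of the forward directive (it also inserts a log directive before errors); reconstructed from the sed expression, the injected fragment is approximately:

		hosts {
		   192.168.76.1 host.minikube.internal
		   fallthrough
		}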
	I1124 09:28:25.606110  310299 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-767267" to be "Ready" ...
	I1124 09:28:25.965646  310299 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:28:25.967385  310299 addons.go:530] duration metric: took 869.143881ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:28:26.111534  310299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-767267" context rescaled to 1 replicas
	W1124 09:28:27.609872  310299 node_ready.go:57] node "old-k8s-version-767267" has "Ready":"False" status (will retry)
	I1124 09:28:28.869234  313419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:28:28.869352  313419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:28:28.869509  313419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:28:28.869613  313419 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:28:28.869673  313419 kubeadm.go:319] OS: Linux
	I1124 09:28:28.869754  313419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:28:28.869796  313419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:28:28.869882  313419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:28:28.869961  313419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:28:28.870027  313419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:28:28.870114  313419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:28:28.870190  313419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:28:28.870251  313419 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:28:28.870369  313419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:28:28.870472  313419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:28:28.870606  313419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:28:28.870671  313419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:28:28.872834  313419 out.go:252]   - Generating certificates and keys ...
	I1124 09:28:28.872901  313419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:28:28.872974  313419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:28:28.873051  313419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:28:28.873112  313419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:28:28.873184  313419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:28:28.873256  313419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:28:28.873356  313419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:28:28.873480  313419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-938348] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:28:28.873543  313419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:28:28.873682  313419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-938348] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:28:28.873749  313419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:28:28.873804  313419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:28:28.873843  313419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:28:28.873894  313419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:28:28.873938  313419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:28:28.873986  313419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:28:28.874032  313419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:28:28.874097  313419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:28:28.874155  313419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:28:28.874228  313419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:28:28.874288  313419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:28:28.875604  313419 out.go:252]   - Booting up control plane ...
	I1124 09:28:28.875679  313419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:28:28.875761  313419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:28:28.875834  313419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:28:28.875930  313419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:28:28.876017  313419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:28:28.876104  313419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:28:28.876186  313419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:28:28.876222  313419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:28:28.876358  313419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:28:28.876465  313419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:28:28.876526  313419 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.792661ms
	I1124 09:28:28.876614  313419 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:28:28.876697  313419 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 09:28:28.876810  313419 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:28:28.876883  313419 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:28:28.876956  313419 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.064109364s
	I1124 09:28:28.877019  313419 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.563439213s
	I1124 09:28:28.877078  313419 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501960516s
	I1124 09:28:28.877177  313419 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:28:28.877304  313419 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:28:28.877426  313419 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:28:28.877632  313419 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-938348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:28:28.877724  313419 kubeadm.go:319] [bootstrap-token] Using token: so3rpj.b838mc7c1pj378eg
	I1124 09:28:28.879777  313419 out.go:252]   - Configuring RBAC rules ...
	I1124 09:28:28.879887  313419 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:28:28.879982  313419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:28:28.880155  313419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:28:28.880281  313419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:28:28.880412  313419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:28:28.880491  313419 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:28:28.880609  313419 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:28:28.880657  313419 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:28:28.880703  313419 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:28:28.880708  313419 kubeadm.go:319] 
	I1124 09:28:28.880760  313419 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:28:28.880767  313419 kubeadm.go:319] 
	I1124 09:28:28.880831  313419 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:28:28.880836  313419 kubeadm.go:319] 
	I1124 09:28:28.880858  313419 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:28:28.880917  313419 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:28:28.880961  313419 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:28:28.880966  313419 kubeadm.go:319] 
	I1124 09:28:28.881015  313419 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:28:28.881020  313419 kubeadm.go:319] 
	I1124 09:28:28.881079  313419 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:28:28.881085  313419 kubeadm.go:319] 
	I1124 09:28:28.881132  313419 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:28:28.881199  313419 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:28:28.881261  313419 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:28:28.881266  313419 kubeadm.go:319] 
	I1124 09:28:28.881350  313419 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:28:28.881432  313419 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:28:28.881442  313419 kubeadm.go:319] 
	I1124 09:28:28.881583  313419 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token so3rpj.b838mc7c1pj378eg \
	I1124 09:28:28.881706  313419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:28:28.881739  313419 kubeadm.go:319] 	--control-plane 
	I1124 09:28:28.881752  313419 kubeadm.go:319] 
	I1124 09:28:28.881874  313419 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:28:28.881886  313419 kubeadm.go:319] 
	I1124 09:28:28.882005  313419 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token so3rpj.b838mc7c1pj378eg \
	I1124 09:28:28.882161  313419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
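	The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes. Per the upstream kubeadm docs, it can be recomputed from the CA certificate; the path below is kubeadm's default, whereas this log shows minikube keeping its certs under /var/lib/minikube/certs:

		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'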
	I1124 09:28:28.882185  313419 cni.go:84] Creating CNI manager for ""
	I1124 09:28:28.882194  313419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:28.884379  313419 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:28:28.885449  313419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:28:28.890264  313419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:28:28.890286  313419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:28:28.904191  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:28:29.110855  313419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:28:29.110921  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:29.110988  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-938348 minikube.k8s.io/updated_at=2025_11_24T09_28_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=no-preload-938348 minikube.k8s.io/primary=true
	I1124 09:28:29.122189  313419 ops.go:34] apiserver oom_adj: -16
	I1124 09:28:29.180604  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:29.681294  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:30.180638  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:29.720630  255979 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:28:29.721034  255979 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1124 09:28:29.721084  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:28:29.721134  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:28:29.750453  255979 cri.go:89] found id: "34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:29.750477  255979 cri.go:89] found id: "45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:29.750483  255979 cri.go:89] found id: ""
	I1124 09:28:29.750492  255979 logs.go:282] 2 containers: [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2]
	I1124 09:28:29.750549  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.754741  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.758442  255979 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:28:29.758521  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:28:29.785761  255979 cri.go:89] found id: "bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:29.785780  255979 cri.go:89] found id: ""
	I1124 09:28:29.785790  255979 logs.go:282] 1 containers: [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c]
	I1124 09:28:29.785847  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.789750  255979 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:28:29.789810  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:28:29.816954  255979 cri.go:89] found id: ""
	I1124 09:28:29.816977  255979 logs.go:282] 0 containers: []
	W1124 09:28:29.816989  255979 logs.go:284] No container was found matching "coredns"
	I1124 09:28:29.816997  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:28:29.817056  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:28:29.843961  255979 cri.go:89] found id: "fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	I1124 09:28:29.843983  255979 cri.go:89] found id: "8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:29.843989  255979 cri.go:89] found id: "4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:29.843993  255979 cri.go:89] found id: ""
	I1124 09:28:29.844002  255979 logs.go:282] 3 containers: [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484]
	I1124 09:28:29.844062  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.848053  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.851699  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.855269  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:28:29.855315  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:28:29.882919  255979 cri.go:89] found id: ""
	I1124 09:28:29.882941  255979 logs.go:282] 0 containers: []
	W1124 09:28:29.882948  255979 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:28:29.882954  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:28:29.883001  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:28:29.909584  255979 cri.go:89] found id: "df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:29.909602  255979 cri.go:89] found id: "233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:29.909606  255979 cri.go:89] found id: ""
	I1124 09:28:29.909614  255979 logs.go:282] 2 containers: [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564]
	I1124 09:28:29.909660  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.913934  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:29.918182  255979 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:28:29.918245  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:28:29.944821  255979 cri.go:89] found id: ""
	I1124 09:28:29.944846  255979 logs.go:282] 0 containers: []
	W1124 09:28:29.944853  255979 logs.go:284] No container was found matching "kindnet"
	I1124 09:28:29.944859  255979 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:28:29.944903  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:28:29.972096  255979 cri.go:89] found id: ""
	I1124 09:28:29.972121  255979 logs.go:282] 0 containers: []
	W1124 09:28:29.972129  255979 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:28:29.972139  255979 logs.go:123] Gathering logs for kube-apiserver [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd] ...
	I1124 09:28:29.972154  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:30.003993  255979 logs.go:123] Gathering logs for kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] ...
	I1124 09:28:30.004023  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	W1124 09:28:30.032130  255979 logs.go:138] Found kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] problem: E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:30.032154  255979 logs.go:123] Gathering logs for kube-scheduler [8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe] ...
	I1124 09:28:30.032168  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:30.107824  255979 logs.go:123] Gathering logs for kube-scheduler [4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484] ...
	I1124 09:28:30.107853  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:30.138329  255979 logs.go:123] Gathering logs for kube-controller-manager [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8] ...
	I1124 09:28:30.138373  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:30.164375  255979 logs.go:123] Gathering logs for kube-apiserver [45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2] ...
	I1124 09:28:30.164405  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:30.208301  255979 logs.go:123] Gathering logs for etcd [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c] ...
	I1124 09:28:30.208343  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:30.245800  255979 logs.go:123] Gathering logs for kube-controller-manager [233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564] ...
	I1124 09:28:30.245828  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:30.275109  255979 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:28:30.275137  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:28:30.345539  255979 logs.go:123] Gathering logs for container status ...
	I1124 09:28:30.345568  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:28:30.378177  255979 logs.go:123] Gathering logs for kubelet ...
	I1124 09:28:30.378204  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:28:30.471345  255979 logs.go:123] Gathering logs for dmesg ...
	I1124 09:28:30.471382  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:28:30.486887  255979 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:28:30.486910  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:28:30.544096  255979 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:28:30.544117  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:30.544127  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:30.544173  255979 out.go:285] X Problems detected in kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433]:
	W1124 09:28:30.544184  255979 out.go:285]   E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:30.544189  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:30.544195  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:30.110185  310299 node_ready.go:57] node "old-k8s-version-767267" has "Ready":"False" status (will retry)
	W1124 09:28:32.609477  310299 node_ready.go:57] node "old-k8s-version-767267" has "Ready":"False" status (will retry)
	I1124 09:28:30.680860  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:31.181517  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:31.680646  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:32.181527  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:32.681289  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:33.181392  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:33.681406  313419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:28:33.746873  313419 kubeadm.go:1114] duration metric: took 4.635997991s to wait for elevateKubeSystemPrivileges
	I1124 09:28:33.746913  313419 kubeadm.go:403] duration metric: took 14.385965067s to StartCluster
	I1124 09:28:33.746936  313419 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:33.747023  313419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:28:33.748842  313419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:33.749074  313419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:28:33.749081  313419 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:28:33.749181  313419 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:28:33.749280  313419 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:28:33.749290  313419 addons.go:70] Setting storage-provisioner=true in profile "no-preload-938348"
	I1124 09:28:33.749312  313419 addons.go:239] Setting addon storage-provisioner=true in "no-preload-938348"
	I1124 09:28:33.749359  313419 addons.go:70] Setting default-storageclass=true in profile "no-preload-938348"
	I1124 09:28:33.749374  313419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-938348"
	I1124 09:28:33.749377  313419 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:28:33.749719  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:33.749898  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:33.751382  313419 out.go:179] * Verifying Kubernetes components...
	I1124 09:28:33.752858  313419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:28:33.776066  313419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:28:33.777458  313419 addons.go:239] Setting addon default-storageclass=true in "no-preload-938348"
	I1124 09:28:33.777503  313419 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:28:33.777920  313419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:28:33.777937  313419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:28:33.777958  313419 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:28:33.777976  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:33.800650  313419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:28:33.800675  313419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:28:33.800731  313419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:28:33.801085  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:33.831241  313419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:28:33.841526  313419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:28:33.901839  313419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:28:33.933192  313419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:28:33.950499  313419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:28:34.046620  313419 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 09:28:34.049906  313419 node_ready.go:35] waiting up to 6m0s for node "no-preload-938348" to be "Ready" ...
	I1124 09:28:34.355834  313419 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:28:34.357112  313419 addons.go:530] duration metric: took 607.920671ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:28:34.553074  313419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-938348" context rescaled to 1 replicas
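	The kapi rescale above trims kubeadm's default pair of CoreDNS replicas down to one for the single-node profile; an illustrative manual equivalent is:

		kubectl -n kube-system scale deployment coredns --replicas=1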
	W1124 09:28:34.610121  310299 node_ready.go:57] node "old-k8s-version-767267" has "Ready":"False" status (will retry)
	W1124 09:28:37.109701  310299 node_ready.go:57] node "old-k8s-version-767267" has "Ready":"False" status (will retry)
	I1124 09:28:37.609123  310299 node_ready.go:49] node "old-k8s-version-767267" is "Ready"
	I1124 09:28:37.609147  310299 node_ready.go:38] duration metric: took 12.002991478s for node "old-k8s-version-767267" to be "Ready" ...
	I1124 09:28:37.609161  310299 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:28:37.609206  310299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:28:37.621350  310299 api_server.go:72] duration metric: took 12.523089064s to wait for apiserver process to appear ...
	I1124 09:28:37.621374  310299 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:28:37.621392  310299 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:28:37.625317  310299 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:28:37.626499  310299 api_server.go:141] control plane version: v1.28.0
	I1124 09:28:37.626521  310299 api_server.go:131] duration metric: took 5.141286ms to wait for apiserver health ...
	I1124 09:28:37.626528  310299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:28:37.630500  310299 system_pods.go:59] 8 kube-system pods found
	I1124 09:28:37.630545  310299 system_pods.go:61] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:28:37.630557  310299 system_pods.go:61] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running
	I1124 09:28:37.630579  310299 system_pods.go:61] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running
	I1124 09:28:37.630584  310299 system_pods.go:61] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running
	I1124 09:28:37.630593  310299 system_pods.go:61] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running
	I1124 09:28:37.630598  310299 system_pods.go:61] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running
	I1124 09:28:37.630606  310299 system_pods.go:61] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running
	I1124 09:28:37.630613  310299 system_pods.go:61] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:28:37.630620  310299 system_pods.go:74] duration metric: took 4.087398ms to wait for pod list to return data ...
	I1124 09:28:37.630627  310299 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:28:37.632323  310299 default_sa.go:45] found service account: "default"
	I1124 09:28:37.632356  310299 default_sa.go:55] duration metric: took 1.721556ms for default service account to be created ...
	I1124 09:28:37.632365  310299 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:28:37.635263  310299 system_pods.go:86] 8 kube-system pods found
	I1124 09:28:37.635290  310299 system_pods.go:89] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:28:37.635298  310299 system_pods.go:89] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running
	I1124 09:28:37.635310  310299 system_pods.go:89] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running
	I1124 09:28:37.635316  310299 system_pods.go:89] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running
	I1124 09:28:37.635325  310299 system_pods.go:89] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running
	I1124 09:28:37.635342  310299 system_pods.go:89] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running
	I1124 09:28:37.635348  310299 system_pods.go:89] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running
	I1124 09:28:37.635364  310299 system_pods.go:89] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:28:37.635388  310299 retry.go:31] will retry after 187.765851ms: missing components: kube-dns
	I1124 09:28:37.827459  310299 system_pods.go:86] 8 kube-system pods found
	I1124 09:28:37.827490  310299 system_pods.go:89] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:28:37.827499  310299 system_pods.go:89] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running
	I1124 09:28:37.827507  310299 system_pods.go:89] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running
	I1124 09:28:37.827513  310299 system_pods.go:89] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running
	I1124 09:28:37.827518  310299 system_pods.go:89] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running
	I1124 09:28:37.827523  310299 system_pods.go:89] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running
	I1124 09:28:37.827528  310299 system_pods.go:89] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running
	I1124 09:28:37.827536  310299 system_pods.go:89] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:28:37.827560  310299 retry.go:31] will retry after 356.083249ms: missing components: kube-dns
	I1124 09:28:38.188653  310299 system_pods.go:86] 8 kube-system pods found
	I1124 09:28:38.188679  310299 system_pods.go:89] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Running
	I1124 09:28:38.188684  310299 system_pods.go:89] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running
	I1124 09:28:38.188687  310299 system_pods.go:89] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running
	I1124 09:28:38.188691  310299 system_pods.go:89] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running
	I1124 09:28:38.188695  310299 system_pods.go:89] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running
	I1124 09:28:38.188698  310299 system_pods.go:89] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running
	I1124 09:28:38.188701  310299 system_pods.go:89] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running
	I1124 09:28:38.188706  310299 system_pods.go:89] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Running
	I1124 09:28:38.188716  310299 system_pods.go:126] duration metric: took 556.344549ms to wait for k8s-apps to be running ...
	I1124 09:28:38.188734  310299 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:28:38.188776  310299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:28:38.202535  310299 system_svc.go:56] duration metric: took 13.791964ms WaitForService to wait for kubelet
	I1124 09:28:38.202565  310299 kubeadm.go:587] duration metric: took 13.104328594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:28:38.202580  310299 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:28:38.205038  310299 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:28:38.205072  310299 node_conditions.go:123] node cpu capacity is 8
	I1124 09:28:38.205089  310299 node_conditions.go:105] duration metric: took 2.503618ms to run NodePressure ...
	I1124 09:28:38.205103  310299 start.go:242] waiting for startup goroutines ...
	I1124 09:28:38.205112  310299 start.go:247] waiting for cluster config update ...
	I1124 09:28:38.205124  310299 start.go:256] writing updated cluster config ...
	I1124 09:28:38.205451  310299 ssh_runner.go:195] Run: rm -f paused
	I1124 09:28:38.209540  310299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:28:38.214127  310299 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gmgwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.218621  310299 pod_ready.go:94] pod "coredns-5dd5756b68-gmgwv" is "Ready"
	I1124 09:28:38.218640  310299 pod_ready.go:86] duration metric: took 4.489062ms for pod "coredns-5dd5756b68-gmgwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.221399  310299 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.225366  310299 pod_ready.go:94] pod "etcd-old-k8s-version-767267" is "Ready"
	I1124 09:28:38.225398  310299 pod_ready.go:86] duration metric: took 3.979509ms for pod "etcd-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.227861  310299 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.231850  310299 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-767267" is "Ready"
	I1124 09:28:38.231867  310299 pod_ready.go:86] duration metric: took 3.986577ms for pod "kube-apiserver-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.234472  310299 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.614004  310299 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-767267" is "Ready"
	I1124 09:28:38.614027  310299 pod_ready.go:86] duration metric: took 379.535459ms for pod "kube-controller-manager-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:38.814635  310299 pod_ready.go:83] waiting for pod "kube-proxy-b8kgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:39.213952  310299 pod_ready.go:94] pod "kube-proxy-b8kgc" is "Ready"
	I1124 09:28:39.213978  310299 pod_ready.go:86] duration metric: took 399.319022ms for pod "kube-proxy-b8kgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:39.414360  310299 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:39.814314  310299 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-767267" is "Ready"
	I1124 09:28:39.814351  310299 pod_ready.go:86] duration metric: took 399.969226ms for pod "kube-scheduler-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:39.814368  310299 pod_ready.go:40] duration metric: took 1.604797958s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:28:39.857461  310299 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:28:39.858654  310299 out.go:203] 
	W1124 09:28:39.859888  310299 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:28:39.861115  310299 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:28:39.862818  310299 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-767267" cluster and "default" namespace by default
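
The pod_ready lines above poll each labelled kube-system pod until it reports the PodReady condition. Below is a minimal client-go sketch of an equivalent check — illustrative only, not minikube's actual pod_ready.go; the kubeconfig path is an assumption.

// Sketch: report whether every kube-system pod carrying one of the
// watched labels has the PodReady condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Path assumed for illustration; any kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same label set the log lines above wait on.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}
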
	W1124 09:28:36.052717  313419 node_ready.go:57] node "no-preload-938348" has "Ready":"False" status (will retry)
	W1124 09:28:38.052752  313419 node_ready.go:57] node "no-preload-938348" has "Ready":"False" status (will retry)
	W1124 09:28:40.053758  313419 node_ready.go:57] node "no-preload-938348" has "Ready":"False" status (will retry)
	I1124 09:28:40.546005  255979 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:28:40.546431  255979 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1124 09:28:40.546485  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:28:40.546530  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:28:40.572584  255979 cri.go:89] found id: "34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:40.572605  255979 cri.go:89] found id: "45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:40.572619  255979 cri.go:89] found id: ""
	I1124 09:28:40.572633  255979 logs.go:282] 2 containers: [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2]
	I1124 09:28:40.572684  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.576730  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.580666  255979 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:28:40.580730  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:28:40.611396  255979 cri.go:89] found id: "bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:40.611417  255979 cri.go:89] found id: ""
	I1124 09:28:40.611427  255979 logs.go:282] 1 containers: [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c]
	I1124 09:28:40.611485  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.615590  255979 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:28:40.615656  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:28:40.645687  255979 cri.go:89] found id: ""
	I1124 09:28:40.645713  255979 logs.go:282] 0 containers: []
	W1124 09:28:40.645721  255979 logs.go:284] No container was found matching "coredns"
	I1124 09:28:40.645726  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:28:40.645779  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:28:40.680477  255979 cri.go:89] found id: "fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	I1124 09:28:40.680494  255979 cri.go:89] found id: "8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:40.680501  255979 cri.go:89] found id: "4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:40.680507  255979 cri.go:89] found id: ""
	I1124 09:28:40.680515  255979 logs.go:282] 3 containers: [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484]
	I1124 09:28:40.680565  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.684856  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.689084  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.692587  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:28:40.692642  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:28:40.723428  255979 cri.go:89] found id: ""
	I1124 09:28:40.723455  255979 logs.go:282] 0 containers: []
	W1124 09:28:40.723466  255979 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:28:40.723474  255979 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:28:40.723616  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:28:40.757773  255979 cri.go:89] found id: "df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:40.757797  255979 cri.go:89] found id: "233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:40.757803  255979 cri.go:89] found id: ""
	I1124 09:28:40.757812  255979 logs.go:282] 2 containers: [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564]
	I1124 09:28:40.757869  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.762316  255979 ssh_runner.go:195] Run: which crictl
	I1124 09:28:40.766359  255979 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:28:40.766427  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:28:40.795658  255979 cri.go:89] found id: ""
	I1124 09:28:40.795682  255979 logs.go:282] 0 containers: []
	W1124 09:28:40.795691  255979 logs.go:284] No container was found matching "kindnet"
	I1124 09:28:40.795699  255979 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:28:40.795756  255979 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:28:40.824894  255979 cri.go:89] found id: ""
	I1124 09:28:40.824919  255979 logs.go:282] 0 containers: []
	W1124 09:28:40.824927  255979 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:28:40.824936  255979 logs.go:123] Gathering logs for etcd [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c] ...
	I1124 09:28:40.824947  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:40.858658  255979 logs.go:123] Gathering logs for kube-scheduler [8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe] ...
	I1124 09:28:40.858686  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:40.932751  255979 logs.go:123] Gathering logs for kube-scheduler [4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484] ...
	I1124 09:28:40.932782  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ac2d494823c1dee4e50e4e8d6f72f2a4b941359da4c2d7b23e76ef028258484"
	I1124 09:28:40.964263  255979 logs.go:123] Gathering logs for kube-controller-manager [233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564] ...
	I1124 09:28:40.964297  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:40.994090  255979 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:28:40.994116  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:28:41.061171  255979 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:28:41.061196  255979 logs.go:123] Gathering logs for kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] ...
	I1124 09:28:41.061213  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	W1124 09:28:41.091005  255979 logs.go:138] Found kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] problem: E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:41.091032  255979 logs.go:123] Gathering logs for kube-controller-manager [df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8] ...
	I1124 09:28:41.091049  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df96f6ed22f5a496bf4a36f67ccd9b7d6dbe2d9968d5eea2f35bc109201e75a8"
	I1124 09:28:41.123366  255979 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:28:41.123398  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:28:41.192720  255979 logs.go:123] Gathering logs for container status ...
	I1124 09:28:41.192751  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:28:41.226004  255979 logs.go:123] Gathering logs for kubelet ...
	I1124 09:28:41.226037  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:28:41.325479  255979 logs.go:123] Gathering logs for dmesg ...
	I1124 09:28:41.325508  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:28:41.345140  255979 logs.go:123] Gathering logs for kube-apiserver [34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd] ...
	I1124 09:28:41.345168  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 34c404ebac00d9805209c09a15f09f254fc604ba0c773dc3c04cf175594fd1fd"
	I1124 09:28:41.379232  255979 logs.go:123] Gathering logs for kube-apiserver [45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2] ...
	I1124 09:28:41.379256  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45bb5bce16bdb7c177f02b1a2d1ebf5ec2408f5f0d3bdf37896ddbf3a82295c2"
	I1124 09:28:41.423115  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:41.423142  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:41.423201  255979 out.go:285] X Problems detected in kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433]:
	W1124 09:28:41.423216  255979 out.go:285]   E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:41.423225  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:41.423234  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
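
The log-gathering pass above repeats one pattern per component: sudo crictl ps -a --quiet --name=<component> to enumerate container IDs, then crictl logs --tail 400 <id> for each. A small stand-alone sketch of that pattern, assuming crictl is on PATH and sudo is passwordless:

// Sketch of the pattern in the 255979 log lines above: list container
// IDs per component, then tail each container's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			continue
		}
		// One container ID per line, matching the "found id:" entries above.
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("==> %s [%s] <==\n", name, id)
			logs, _ := exec.Command("sudo", "crictl", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
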
	W1124 09:28:42.553436  313419 node_ready.go:57] node "no-preload-938348" has "Ready":"False" status (will retry)
	W1124 09:28:44.553688  313419 node_ready.go:57] node "no-preload-938348" has "Ready":"False" status (will retry)
	I1124 09:28:46.053112  313419 node_ready.go:49] node "no-preload-938348" is "Ready"
	I1124 09:28:46.053137  313419 node_ready.go:38] duration metric: took 12.003179606s for node "no-preload-938348" to be "Ready" ...
	I1124 09:28:46.053151  313419 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:28:46.053193  313419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:28:46.065546  313419 api_server.go:72] duration metric: took 12.316432301s to wait for apiserver process to appear ...
	I1124 09:28:46.065573  313419 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:28:46.065592  313419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:28:46.071988  313419 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:28:46.073218  313419 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:28:46.073246  313419 api_server.go:131] duration metric: took 7.666642ms to wait for apiserver health ...
	I1124 09:28:46.073257  313419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:28:46.076886  313419 system_pods.go:59] 8 kube-system pods found
	I1124 09:28:46.076923  313419 system_pods.go:61] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:28:46.076931  313419 system_pods.go:61] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running
	I1124 09:28:46.076939  313419 system_pods.go:61] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:28:46.076962  313419 system_pods.go:61] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:28:46.076973  313419 system_pods.go:61] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running
	I1124 09:28:46.076978  313419 system_pods.go:61] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:28:46.076983  313419 system_pods.go:61] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running
	I1124 09:28:46.076990  313419 system_pods.go:61] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:28:46.076997  313419 system_pods.go:74] duration metric: took 3.733337ms to wait for pod list to return data ...
	I1124 09:28:46.077008  313419 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:28:46.079914  313419 default_sa.go:45] found service account: "default"
	I1124 09:28:46.079934  313419 default_sa.go:55] duration metric: took 2.916056ms for default service account to be created ...
	I1124 09:28:46.079944  313419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:28:46.083505  313419 system_pods.go:86] 8 kube-system pods found
	I1124 09:28:46.083537  313419 system_pods.go:89] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:28:46.083545  313419 system_pods.go:89] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running
	I1124 09:28:46.083553  313419 system_pods.go:89] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:28:46.083567  313419 system_pods.go:89] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:28:46.083580  313419 system_pods.go:89] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running
	I1124 09:28:46.083587  313419 system_pods.go:89] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:28:46.083594  313419 system_pods.go:89] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running
	I1124 09:28:46.083609  313419 system_pods.go:89] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:28:46.083653  313419 retry.go:31] will retry after 206.195017ms: missing components: kube-dns
	I1124 09:28:46.293773  313419 system_pods.go:86] 8 kube-system pods found
	I1124 09:28:46.293795  313419 system_pods.go:89] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Running
	I1124 09:28:46.293800  313419 system_pods.go:89] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running
	I1124 09:28:46.293804  313419 system_pods.go:89] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:28:46.293810  313419 system_pods.go:89] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:28:46.293814  313419 system_pods.go:89] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running
	I1124 09:28:46.293819  313419 system_pods.go:89] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:28:46.293822  313419 system_pods.go:89] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running
	I1124 09:28:46.293825  313419 system_pods.go:89] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Running
	I1124 09:28:46.293832  313419 system_pods.go:126] duration metric: took 213.882341ms to wait for k8s-apps to be running ...
	I1124 09:28:46.293838  313419 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:28:46.293875  313419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:28:46.307052  313419 system_svc.go:56] duration metric: took 13.206716ms WaitForService to wait for kubelet
	I1124 09:28:46.307082  313419 kubeadm.go:587] duration metric: took 12.557968057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:28:46.307102  313419 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:28:46.310080  313419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:28:46.310115  313419 node_conditions.go:123] node cpu capacity is 8
	I1124 09:28:46.310131  313419 node_conditions.go:105] duration metric: took 3.02432ms to run NodePressure ...
	I1124 09:28:46.310142  313419 start.go:242] waiting for startup goroutines ...
	I1124 09:28:46.310156  313419 start.go:247] waiting for cluster config update ...
	I1124 09:28:46.310168  313419 start.go:256] writing updated cluster config ...
	I1124 09:28:46.310430  313419 ssh_runner.go:195] Run: rm -f paused
	I1124 09:28:46.314765  313419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:28:46.318434  313419 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ll2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.322591  313419 pod_ready.go:94] pod "coredns-7d764666f9-ll2c4" is "Ready"
	I1124 09:28:46.322609  313419 pod_ready.go:86] duration metric: took 4.157286ms for pod "coredns-7d764666f9-ll2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.324553  313419 pod_ready.go:83] waiting for pod "etcd-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.328021  313419 pod_ready.go:94] pod "etcd-no-preload-938348" is "Ready"
	I1124 09:28:46.328037  313419 pod_ready.go:86] duration metric: took 3.465624ms for pod "etcd-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.329768  313419 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.835543  313419 pod_ready.go:94] pod "kube-apiserver-no-preload-938348" is "Ready"
	I1124 09:28:46.835571  313419 pod_ready.go:86] duration metric: took 505.784451ms for pod "kube-apiserver-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:46.837609  313419 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:47.119944  313419 pod_ready.go:94] pod "kube-controller-manager-no-preload-938348" is "Ready"
	I1124 09:28:47.119968  313419 pod_ready.go:86] duration metric: took 282.33955ms for pod "kube-controller-manager-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:47.319579  313419 pod_ready.go:83] waiting for pod "kube-proxy-smqgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:47.719806  313419 pod_ready.go:94] pod "kube-proxy-smqgp" is "Ready"
	I1124 09:28:47.719832  313419 pod_ready.go:86] duration metric: took 400.232284ms for pod "kube-proxy-smqgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:47.918954  313419 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:48.319273  313419 pod_ready.go:94] pod "kube-scheduler-no-preload-938348" is "Ready"
	I1124 09:28:48.319297  313419 pod_ready.go:86] duration metric: took 400.322041ms for pod "kube-scheduler-no-preload-938348" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:28:48.319308  313419 pod_ready.go:40] duration metric: took 2.004517547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:28:48.363699  313419 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:28:48.365855  313419 out.go:179] * Done! kubectl is now configured to use "no-preload-938348" cluster and "default" namespace by default
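
The api_server.go lines in the two start flows above poll https://<node-ip>:8443/healthz until it answers 200, treating "connection refused" as a retryable "stopped" state. A minimal probe in the same spirit; InsecureSkipVerify is an illustration-only shortcut, since the real check authenticates against the cluster CA and client certificates.

// Sketch: poll the apiserver's /healthz until it returns 200 or the
// deadline passes. Address taken from the no-preload log lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustration only; do not skip verification in real checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second) // retry, mirroring the "stopped:" lines above
	}
	fmt.Println("apiserver never became healthy")
}
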
	
	
	==> CRI-O <==
	Nov 24 09:28:37 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:37.734882052Z" level=info msg="Starting container: fe7479bb8504891e483fc84f53d3027ee6aa7798482c8bc1ec72941ef3f666ae" id=8214e04f-ec4c-4ba6-9a99-d79f69532a18 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:28:37 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:37.736972282Z" level=info msg="Started container" PID=2143 containerID=fe7479bb8504891e483fc84f53d3027ee6aa7798482c8bc1ec72941ef3f666ae description=kube-system/coredns-5dd5756b68-gmgwv/coredns id=8214e04f-ec4c-4ba6-9a99-d79f69532a18 name=/runtime.v1.RuntimeService/StartContainer sandboxID=000363a63f9ab803499fe7281fb533fe4aa09459aa7df6ef46c45c4bacc33879
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.649098573Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f1e98b84-308d-45f8-a416-0e54c68e156a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.649186507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.654814194Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:af43b26aabb607e4ab4069ada6ff299789b8917410a10f5f966e6dc527a8adc0 UID:2e8d6e38-9822-430d-b775-977600e48262 NetNS:/var/run/netns/95f80f9a-16cc-428a-8bc5-05a53ba10e69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a343b8}] Aliases:map[]}"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.654845846Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.66579049Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:af43b26aabb607e4ab4069ada6ff299789b8917410a10f5f966e6dc527a8adc0 UID:2e8d6e38-9822-430d-b775-977600e48262 NetNS:/var/run/netns/95f80f9a-16cc-428a-8bc5-05a53ba10e69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a343b8}] Aliases:map[]}"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.665990307Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.667109512Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.668260038Z" level=info msg="Ran pod sandbox af43b26aabb607e4ab4069ada6ff299789b8917410a10f5f966e6dc527a8adc0 with infra container: default/busybox/POD" id=f1e98b84-308d-45f8-a416-0e54c68e156a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.670443281Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e132a05-8661-4321-8e33-323d5fd1ea76 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.670642821Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2e132a05-8661-4321-8e33-323d5fd1ea76 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.670689239Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2e132a05-8661-4321-8e33-323d5fd1ea76 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.671186129Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e518b0d-21b0-4209-827d-879bb8bef552 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:28:40 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:40.672818199Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.92498983Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2e518b0d-21b0-4209-827d-879bb8bef552 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.92585347Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=723d3a46-abb3-4c16-b146-e228e7738673 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.92717446Z" level=info msg="Creating container: default/busybox/busybox" id=78ffce84-9503-4a25-a857-7d641c83c669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.927294877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.931057874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.931586858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.969636247Z" level=info msg="Created container 904494263b83a92067d853e0bd3646ff8f4749cef38ce5a02074548da4501e2e: default/busybox/busybox" id=78ffce84-9503-4a25-a857-7d641c83c669 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.970195128Z" level=info msg="Starting container: 904494263b83a92067d853e0bd3646ff8f4749cef38ce5a02074548da4501e2e" id=54447cbd-a441-4b17-aabf-a7d2f29dbcaa name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:28:41 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:41.971871098Z" level=info msg="Started container" PID=2220 containerID=904494263b83a92067d853e0bd3646ff8f4749cef38ce5a02074548da4501e2e description=default/busybox/busybox id=54447cbd-a441-4b17-aabf-a7d2f29dbcaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=af43b26aabb607e4ab4069ada6ff299789b8917410a10f5f966e6dc527a8adc0
	Nov 24 09:28:49 old-k8s-version-767267 crio[770]: time="2025-11-24T09:28:49.132525729Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
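
The busybox pull above follows the standard CRI sequence: ImageStatus (not found) → PullImage → CreateContainer → StartContainer. A sketch of the first two calls over the CRI gRPC API against the crio socket; the exact request/response field set should be checked against your cri-api version.

// Sketch of the ImageStatus -> PullImage steps visible in the CRI-O log
// above. Socket path matches the cri-socket annotation in "describe
// nodes" below.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	ic := runtimeapi.NewImageServiceClient(conn)

	// "Checking image status": a nil Image in the response means not found.
	st, err := ic.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image" step.
		pulled, err := ic.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: img})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", pulled.ImageRef)
	}
}
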
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	904494263b83a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   af43b26aabb60       busybox                                          default
	fe7479bb85048       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   000363a63f9ab       coredns-5dd5756b68-gmgwv                         kube-system
	4640d8cbeb3ef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   43fdbaf280598       storage-provisioner                              kube-system
	c95a63cede9b0       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   f2d4d95fa948e       kindnet-8tdrm                                    kube-system
	fcdfd40ae9162       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   51096c9646c17       kube-proxy-b8kgc                                 kube-system
	b2c34c403e68f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   ad735acaf4c23       kube-apiserver-old-k8s-version-767267            kube-system
	f0627ffed6961       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   6af045eae821f       kube-scheduler-old-k8s-version-767267            kube-system
	0897e054438e6       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   7fadb9f6189a7       kube-controller-manager-old-k8s-version-767267   kube-system
	7a7af79af1180       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   7e4f4be544c9d       etcd-old-k8s-version-767267                      kube-system
	
	
	==> coredns [fe7479bb8504891e483fc84f53d3027ee6aa7798482c8bc1ec72941ef3f666ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52624 - 38496 "HINFO IN 8146367765893864925.1243331106081340922. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424579869s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-767267
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-767267
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=old-k8s-version-767267
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-767267
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:28:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:28:42 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:28:42 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:28:42 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:28:42 +0000   Mon, 24 Nov 2025 09:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-767267
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f1cd3fa8-d2f0-4c2f-8873-1620b1eea27a
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gmgwv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-767267                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-8tdrm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-767267             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-767267    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-b8kgc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-767267             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-767267 event: Registered Node old-k8s-version-767267 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-767267 status is now: NodeReady
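
As a cross-check of the Allocated resources table: CPU requests sum to 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 850m, and 850m / 8000m (8 CPUs) ≈ 10.6%, floored to the 10% shown; per-pod, 250m / 8000m = 3.125% appears as 3%. Memory requests sum to 70Mi + 100Mi + 50Mi = 220Mi = 225280Ki, and 225280Ki / 32863360Ki ≈ 0.69%, displayed as 0%.
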
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [7a7af79af1180bc50f8f851b526d55d448833cc3e98d2900804b90851f04612e] <==
	{"level":"info","ts":"2025-11-24T09:28:07.950832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T09:28:07.95175Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-767267 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T09:28:07.951794Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:28:07.951786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:28:07.951922Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:28:07.952103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:28:07.952182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:28:07.952631Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:28:07.952725Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:28:07.952748Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:28:07.95316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T09:28:07.953162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T09:28:24.019778Z","caller":"traceutil/trace.go:171","msg":"trace[657043614] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"113.542321ms","start":"2025-11-24T09:28:23.906001Z","end":"2025-11-24T09:28:24.019543Z","steps":["trace[657043614] 'process raft request'  (duration: 107.860652ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:28:24.020951Z","caller":"traceutil/trace.go:171","msg":"trace[1183840739] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"104.570431ms","start":"2025-11-24T09:28:23.916171Z","end":"2025-11-24T09:28:24.020741Z","steps":["trace[1183840739] 'process raft request'  (duration: 103.738469ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:28:24.263003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.797708ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356859094600988 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-public/cluster-info\" mod_revision:261 > success:<request_put:<key:\"/registry/configmaps/kube-public/cluster-info\" value_size:2135 >> failure:<request_range:<key:\"/registry/configmaps/kube-public/cluster-info\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:28:24.263099Z","caller":"traceutil/trace.go:171","msg":"trace[1598412092] linearizableReadLoop","detail":"{readStateIndex:337; appliedIndex:336; }","duration":"225.202127ms","start":"2025-11-24T09:28:24.037883Z","end":"2025-11-24T09:28:24.263085Z","steps":["trace[1598412092] 'read index received'  (duration: 97.693632ms)","trace[1598412092] 'applied index is now lower than readState.Index'  (duration: 127.507582ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:28:24.263135Z","caller":"traceutil/trace.go:171","msg":"trace[1885778012] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"234.036356ms","start":"2025-11-24T09:28:24.029072Z","end":"2025-11-24T09:28:24.263108Z","steps":["trace[1885778012] 'process raft request'  (duration: 106.558472ms)","trace[1885778012] 'compare'  (duration: 126.657339ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:28:24.263204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.347414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-24T09:28:24.263233Z","caller":"traceutil/trace.go:171","msg":"trace[657207335] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:325; }","duration":"225.379749ms","start":"2025-11-24T09:28:24.037842Z","end":"2025-11-24T09:28:24.263222Z","steps":["trace[657207335] 'agreement among raft nodes before linearized reading'  (duration: 225.312337ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:28:24.263256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.511403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T09:28:24.263295Z","caller":"traceutil/trace.go:171","msg":"trace[1235081851] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:325; }","duration":"187.557591ms","start":"2025-11-24T09:28:24.075725Z","end":"2025-11-24T09:28:24.263283Z","steps":["trace[1235081851] 'agreement among raft nodes before linearized reading'  (duration: 187.479983ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:28:24.263256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.359976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-24T09:28:24.263357Z","caller":"traceutil/trace.go:171","msg":"trace[570145344] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:325; }","duration":"125.442415ms","start":"2025-11-24T09:28:24.137887Z","end":"2025-11-24T09:28:24.26333Z","steps":["trace[570145344] 'agreement among raft nodes before linearized reading'  (duration: 125.333667ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:28:24.263402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.82074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-11-24T09:28:24.263434Z","caller":"traceutil/trace.go:171","msg":"trace[277950882] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:325; }","duration":"174.855181ms","start":"2025-11-24T09:28:24.088567Z","end":"2025-11-24T09:28:24.263423Z","steps":["trace[277950882] 'agreement among raft nodes before linearized reading'  (duration: 174.698309ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:28:50 up  1:11,  0 user,  load average: 3.82, 3.19, 2.07
	Linux old-k8s-version-767267 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c95a63cede9b07652b54408483f556bbde6a21a4813bc86b9f5376a395c462d4] <==
	I1124 09:28:27.058185       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:28:27.058592       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:28:27.058782       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:28:27.058808       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:28:27.058843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:28:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:28:27.267010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:28:27.357593       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:28:27.357627       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:28:27.357799       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:28:27.658031       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:28:27.658141       1 metrics.go:72] Registering metrics
	I1124 09:28:27.658232       1 controller.go:711] "Syncing nftables rules"
	I1124 09:28:37.274010       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:28:37.274058       1 main.go:301] handling current node
	I1124 09:28:47.267376       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:28:47.267413       1 main.go:301] handling current node
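
The kindnet entries above show a reconcile loop firing every ten seconds (09:28:27, :37, :47), each pass re-handling the current node's IP set. A schematic ticker loop with the same cadence; handleNode is a hypothetical stand-in for kindnet's per-node logic, not its real function.

// Schematic of the ten-second node-handling cadence in the log above.
package main

import (
	"fmt"
	"time"
)

func handleNode(ips map[string]struct{}) {
	fmt.Printf("Handling node with IPs: %v\n", ips)
}

func main() {
	ips := map[string]struct{}{"192.168.76.2": {}}
	t := time.NewTicker(10 * time.Second)
	defer t.Stop()
	for range t.C {
		handleNode(ips)
	}
}
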
	
	
	==> kube-apiserver [b2c34c403e68f38214507295d34957fe67dc890e9980e2a78317ef1f13ecd487] <==
	I1124 09:28:09.119524       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 09:28:09.122285       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 09:28:09.122411       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 09:28:09.141142       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 09:28:09.141355       1 aggregator.go:166] initial CRD sync complete...
	I1124 09:28:09.141415       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 09:28:09.141446       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:28:09.141482       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:28:09.144506       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 09:28:09.165811       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:28:10.021316       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:28:10.025051       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:28:10.025071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:28:10.479417       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:28:10.518725       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:28:10.626899       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:28:10.632433       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 09:28:10.633554       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 09:28:10.638801       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:28:11.083888       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 09:28:11.987114       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 09:28:12.001266       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:28:12.010562       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 09:28:24.594860       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:28:24.744293       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0897e054438e663bbed6f705651754dda8aa086656764bc07a5aa422bf4d80e8] <==
	I1124 09:28:24.091178       1 shared_informer.go:318] Caches are synced for persistent volume
	I1124 09:28:24.092441       1 shared_informer.go:318] Caches are synced for ephemeral
	I1124 09:28:24.145054       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 09:28:24.460824       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:28:24.535708       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:28:24.535742       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 09:28:24.604298       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-b8kgc"
	I1124 09:28:24.604327       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8tdrm"
	I1124 09:28:24.746981       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 09:28:25.015669       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gmgwv"
	I1124 09:28:25.025812       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8khwp"
	I1124 09:28:25.039592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="292.673594ms"
	I1124 09:28:25.050067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.410595ms"
	I1124 09:28:25.050213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.353µs"
	I1124 09:28:25.646748       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 09:28:25.678179       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8khwp"
	I1124 09:28:25.694733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.398632ms"
	I1124 09:28:25.713628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.831588ms"
	I1124 09:28:25.713773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.233µs"
	I1124 09:28:25.713837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.255µs"
	I1124 09:28:37.356873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.809µs"
	I1124 09:28:37.367524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.218µs"
	I1124 09:28:38.179907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.968342ms"
	I1124 09:28:38.180013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.512µs"
	I1124 09:28:38.917053       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [fcdfd40ae91620aa5b0a8959adb24f05ddc8860203075d96e7011e44984d0813] <==
	I1124 09:28:25.173633       1 server_others.go:69] "Using iptables proxy"
	I1124 09:28:25.186715       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 09:28:25.215269       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:28:25.218948       1 server_others.go:152] "Using iptables Proxier"
	I1124 09:28:25.218998       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 09:28:25.219008       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 09:28:25.219059       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 09:28:25.219545       1 server.go:846] "Version info" version="v1.28.0"
	I1124 09:28:25.219568       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:28:25.220249       1 config.go:188] "Starting service config controller"
	I1124 09:28:25.220280       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 09:28:25.220306       1 config.go:315] "Starting node config controller"
	I1124 09:28:25.220310       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 09:28:25.226596       1 config.go:97] "Starting endpoint slice config controller"
	I1124 09:28:25.226691       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 09:28:25.226725       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 09:28:25.321106       1 shared_informer.go:318] Caches are synced for node config
	I1124 09:28:25.322233       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [f0627ffed6961059ec903809a679ed9c1908a4d021e24ee7094a3ee982303769] <==
	W1124 09:28:09.102128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:28:09.102189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:28:09.102233       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 09:28:09.102253       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 09:28:09.906102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 09:28:09.906141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 09:28:09.924798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 09:28:09.924831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 09:28:09.947290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 09:28:09.947328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 09:28:09.950513       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:28:09.950545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:28:09.951630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 09:28:09.951659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 09:28:09.978307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 09:28:09.978347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 09:28:10.023241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 09:28:10.023287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 09:28:10.140902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 09:28:10.140948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 09:28:10.284500       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 09:28:10.284544       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:28:10.295328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 09:28:10.295413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1124 09:28:11.992405       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.030570    1382 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.031924    1382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.611130    1382 topology_manager.go:215] "Topology Admit Handler" podUID="318115cc-de22-4a55-a7aa-2acc886827d8" podNamespace="kube-system" podName="kube-proxy-b8kgc"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.611296    1382 topology_manager.go:215] "Topology Admit Handler" podUID="de72ff2b-7361-460c-b1e8-288fb9a6eb03" podNamespace="kube-system" podName="kindnet-8tdrm"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.629861    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/318115cc-de22-4a55-a7aa-2acc886827d8-kube-proxy\") pod \"kube-proxy-b8kgc\" (UID: \"318115cc-de22-4a55-a7aa-2acc886827d8\") " pod="kube-system/kube-proxy-b8kgc"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.629925    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjr5\" (UniqueName: \"kubernetes.io/projected/de72ff2b-7361-460c-b1e8-288fb9a6eb03-kube-api-access-jrjr5\") pod \"kindnet-8tdrm\" (UID: \"de72ff2b-7361-460c-b1e8-288fb9a6eb03\") " pod="kube-system/kindnet-8tdrm"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.629958    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318115cc-de22-4a55-a7aa-2acc886827d8-xtables-lock\") pod \"kube-proxy-b8kgc\" (UID: \"318115cc-de22-4a55-a7aa-2acc886827d8\") " pod="kube-system/kube-proxy-b8kgc"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.630039    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpgz7\" (UniqueName: \"kubernetes.io/projected/318115cc-de22-4a55-a7aa-2acc886827d8-kube-api-access-kpgz7\") pod \"kube-proxy-b8kgc\" (UID: \"318115cc-de22-4a55-a7aa-2acc886827d8\") " pod="kube-system/kube-proxy-b8kgc"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.630092    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318115cc-de22-4a55-a7aa-2acc886827d8-lib-modules\") pod \"kube-proxy-b8kgc\" (UID: \"318115cc-de22-4a55-a7aa-2acc886827d8\") " pod="kube-system/kube-proxy-b8kgc"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.630130    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/de72ff2b-7361-460c-b1e8-288fb9a6eb03-cni-cfg\") pod \"kindnet-8tdrm\" (UID: \"de72ff2b-7361-460c-b1e8-288fb9a6eb03\") " pod="kube-system/kindnet-8tdrm"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.630159    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de72ff2b-7361-460c-b1e8-288fb9a6eb03-xtables-lock\") pod \"kindnet-8tdrm\" (UID: \"de72ff2b-7361-460c-b1e8-288fb9a6eb03\") " pod="kube-system/kindnet-8tdrm"
	Nov 24 09:28:24 old-k8s-version-767267 kubelet[1382]: I1124 09:28:24.630184    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de72ff2b-7361-460c-b1e8-288fb9a6eb03-lib-modules\") pod \"kindnet-8tdrm\" (UID: \"de72ff2b-7361-460c-b1e8-288fb9a6eb03\") " pod="kube-system/kindnet-8tdrm"
	Nov 24 09:28:26 old-k8s-version-767267 kubelet[1382]: I1124 09:28:26.784272    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b8kgc" podStartSLOduration=2.784214294 podCreationTimestamp="2025-11-24 09:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:25.13989803 +0000 UTC m=+13.178713799" watchObservedRunningTime="2025-11-24 09:28:26.784214294 +0000 UTC m=+14.823030062"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.334974    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.356852    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8tdrm" podStartSLOduration=11.645468236 podCreationTimestamp="2025-11-24 09:28:24 +0000 UTC" firstStartedPulling="2025-11-24 09:28:25.021638335 +0000 UTC m=+13.060454101" lastFinishedPulling="2025-11-24 09:28:26.732961934 +0000 UTC m=+14.771777698" observedRunningTime="2025-11-24 09:28:27.148481045 +0000 UTC m=+15.187296814" watchObservedRunningTime="2025-11-24 09:28:37.356791833 +0000 UTC m=+25.395607601"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.357213    1382 topology_manager.go:215] "Topology Admit Handler" podUID="fa53b4e5-62ed-42ac-82be-5f220cd9ab0a" podNamespace="kube-system" podName="coredns-5dd5756b68-gmgwv"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.358553    1382 topology_manager.go:215] "Topology Admit Handler" podUID="6347c3c7-cb5b-42ab-abb8-9ca37af285b5" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.526902    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6347c3c7-cb5b-42ab-abb8-9ca37af285b5-tmp\") pod \"storage-provisioner\" (UID: \"6347c3c7-cb5b-42ab-abb8-9ca37af285b5\") " pod="kube-system/storage-provisioner"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.526974    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa53b4e5-62ed-42ac-82be-5f220cd9ab0a-config-volume\") pod \"coredns-5dd5756b68-gmgwv\" (UID: \"fa53b4e5-62ed-42ac-82be-5f220cd9ab0a\") " pod="kube-system/coredns-5dd5756b68-gmgwv"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.527140    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r949\" (UniqueName: \"kubernetes.io/projected/6347c3c7-cb5b-42ab-abb8-9ca37af285b5-kube-api-access-7r949\") pod \"storage-provisioner\" (UID: \"6347c3c7-cb5b-42ab-abb8-9ca37af285b5\") " pod="kube-system/storage-provisioner"
	Nov 24 09:28:37 old-k8s-version-767267 kubelet[1382]: I1124 09:28:37.527180    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dv4r\" (UniqueName: \"kubernetes.io/projected/fa53b4e5-62ed-42ac-82be-5f220cd9ab0a-kube-api-access-9dv4r\") pod \"coredns-5dd5756b68-gmgwv\" (UID: \"fa53b4e5-62ed-42ac-82be-5f220cd9ab0a\") " pod="kube-system/coredns-5dd5756b68-gmgwv"
	Nov 24 09:28:38 old-k8s-version-767267 kubelet[1382]: I1124 09:28:38.162366    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.16228888 podCreationTimestamp="2025-11-24 09:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:38.162032466 +0000 UTC m=+26.200848251" watchObservedRunningTime="2025-11-24 09:28:38.16228888 +0000 UTC m=+26.201104649"
	Nov 24 09:28:40 old-k8s-version-767267 kubelet[1382]: I1124 09:28:40.046563    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gmgwv" podStartSLOduration=16.046505675 podCreationTimestamp="2025-11-24 09:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:38.172495243 +0000 UTC m=+26.211311011" watchObservedRunningTime="2025-11-24 09:28:40.046505675 +0000 UTC m=+28.085321508"
	Nov 24 09:28:40 old-k8s-version-767267 kubelet[1382]: I1124 09:28:40.047495    1382 topology_manager.go:215] "Topology Admit Handler" podUID="2e8d6e38-9822-430d-b775-977600e48262" podNamespace="default" podName="busybox"
	Nov 24 09:28:40 old-k8s-version-767267 kubelet[1382]: I1124 09:28:40.241268    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68mg\" (UniqueName: \"kubernetes.io/projected/2e8d6e38-9822-430d-b775-977600e48262-kube-api-access-g68mg\") pod \"busybox\" (UID: \"2e8d6e38-9822-430d-b775-977600e48262\") " pod="default/busybox"
	
	
	==> storage-provisioner [4640d8cbeb3ef2392596ab5bbdd3ec989e0505573c920148de56eea9b6ba77a1] <==
	I1124 09:28:37.715230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:28:37.723948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:28:37.724007       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 09:28:37.785120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:28:37.785240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e485f296-a460-439b-80f5-d911ee8d6a0d", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-767267_57a0c724-5b9f-4490-9ed4-ea74d8375096 became leader
	I1124 09:28:37.785395       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-767267_57a0c724-5b9f-4490-9ed4-ea74d8375096!
	I1124 09:28:37.886103       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-767267_57a0c724-5b9f-4490-9ed4-ea74d8375096!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-767267 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (495.470876ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:28:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
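The MK_ADDON_ENABLE_PAUSED error above is raised by minikube's paused-state check, which shells out to `sudo runc list -f json` inside the node container and fails because /run/runc is absent. A minimal manual reproduction sketch, assuming the no-preload-938348 node container is reachable via `docker exec` and has crictl installed as in the standard kicbase image (these commands are illustrative, not part of the test harness):

	docker exec no-preload-938348 ls -ld /run/runc         # confirm the runc state directory is missing
	docker exec no-preload-938348 sudo runc list -f json   # re-run the exact command the paused check uses
	docker exec no-preload-938348 sudo crictl ps           # cross-check that CRI-O itself still reports containers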
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-938348 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-938348 describe deploy/metrics-server -n kube-system: exit status 1 (79.042624ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-938348 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
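Because the enable command exited before applying any manifests, the metrics-server deployment was never created, so the image assertion at start_stop_delete_test.go:219 compares against an empty deployment-info string. A hedged manual verification, assuming the deployment had actually been created (illustrative only):

	kubectl --context no-preload-938348 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# on success this would print: fake.domain/registry.k8s.io/echoserver:1.4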
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-938348
helpers_test.go:243: (dbg) docker inspect no-preload-938348:

-- stdout --
	[
	    {
	        "Id": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	        "Created": "2025-11-24T09:28:01.464607298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314083,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:28:01.4978229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hosts",
	        "LogPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761-json.log",
	        "Name": "/no-preload-938348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-938348:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-938348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	                "LowerDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-938348",
	                "Source": "/var/lib/docker/volumes/no-preload-938348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-938348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-938348",
	                "name.minikube.sigs.k8s.io": "no-preload-938348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "66e43ecbd7ddf1015219a7bb1eef9c82907e97470bf29a95fbd447cdc26cb107",
	            "SandboxKey": "/var/run/docker/netns/66e43ecbd7dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-938348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f03f3b5e2bfb0cd68097788ad47d94eb14c12cf815ca0f14753094201a5fac2",
	                    "EndpointID": "dcbd397e15d2c7e2420e5d9125f86f61345621c1677fc40b5ef036193b392ed2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:43:a6:87:c2:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-938348",
	                        "c1c5f9bb92d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-938348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-938348 logs -n 25: (1.635823837s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo docker system info                                                                                                                                 │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cri-dockerd --version                                                                                                                              │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo containerd config dump                                                                                                                             │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo crio config                                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ delete  │ -p bridge-949664                                                                                                                                                         │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ stop    │ -p old-k8s-version-767267 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:28:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:28:54.079133  326387 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:28:54.079399  326387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:28:54.079409  326387 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:54.079414  326387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:28:54.079639  326387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:28:54.080093  326387 out.go:368] Setting JSON to false
	I1124 09:28:54.081252  326387 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4280,"bootTime":1763972254,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:28:54.081302  326387 start.go:143] virtualization: kvm guest
	I1124 09:28:54.083619  326387 out.go:179] * [default-k8s-diff-port-164377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:28:54.085055  326387 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:28:54.085067  326387 notify.go:221] Checking for updates...
	I1124 09:28:54.087667  326387 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:28:54.088888  326387 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:28:54.090087  326387 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:28:54.091202  326387 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:28:54.092364  326387 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:28:54.093834  326387 config.go:182] Loaded profile config "kubernetes-upgrade-967467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:28:54.093923  326387 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:28:54.094021  326387 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:28:54.094116  326387 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:28:54.117760  326387 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:28:54.117892  326387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:28:54.176757  326387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-24 09:28:54.165972202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:28:54.176891  326387 docker.go:319] overlay module found
	I1124 09:28:54.178914  326387 out.go:179] * Using the docker driver based on user configuration
	I1124 09:28:54.180122  326387 start.go:309] selected driver: docker
	I1124 09:28:54.180138  326387 start.go:927] validating driver "docker" against <nil>
	I1124 09:28:54.180149  326387 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:28:54.180756  326387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:28:54.239863  326387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-24 09:28:54.229790893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:28:54.240020  326387 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:28:54.240246  326387 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:28:54.241812  326387 out.go:179] * Using Docker driver with root privileges
	I1124 09:28:54.242953  326387 cni.go:84] Creating CNI manager for ""
	I1124 09:28:54.243023  326387 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:28:54.243036  326387 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:28:54.243113  326387 start.go:353] cluster config:
	{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:28:54.244352  326387 out.go:179] * Starting "default-k8s-diff-port-164377" primary control-plane node in "default-k8s-diff-port-164377" cluster
	I1124 09:28:54.245483  326387 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:28:54.246551  326387 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:28:54.247564  326387 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:28:54.247595  326387 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:28:54.247612  326387 cache.go:65] Caching tarball of preloaded images
	I1124 09:28:54.247654  326387 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:28:54.247691  326387 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:28:54.247701  326387 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:28:54.247790  326387 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/config.json ...
	I1124 09:28:54.247824  326387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/config.json: {Name:mk96dbb65063f61abb836d520b0f04d82423a18a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:28:54.268551  326387 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:28:54.268569  326387 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:28:54.268583  326387 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:28:54.268613  326387 start.go:360] acquireMachinesLock for default-k8s-diff-port-164377: {Name:mkd718f87c8feaecdc5abdde6ac9abecef458b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:28:54.268702  326387 start.go:364] duration metric: took 74.668µs to acquireMachinesLock for "default-k8s-diff-port-164377"
	I1124 09:28:54.268724  326387 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:28:54.268791  326387 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:28:52.097989  255979 logs.go:123] Gathering logs for dmesg ...
	I1124 09:28:52.098019  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:28:52.113179  255979 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:28:52.113202  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:28:52.168865  255979 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:28:52.168893  255979 logs.go:123] Gathering logs for etcd [bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c] ...
	I1124 09:28:52.168942  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc5951000da3f07bce3ba7546e20dd300631b6514d2c97134c2e967a2c665e8c"
	I1124 09:28:52.203046  255979 logs.go:123] Gathering logs for kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] ...
	I1124 09:28:52.203071  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433"
	W1124 09:28:52.228306  255979 logs.go:138] Found kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433] problem: E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:52.228327  255979 logs.go:123] Gathering logs for kube-scheduler [8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe] ...
	I1124 09:28:52.228355  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b086daa0c5968e2c0fba021b6d02403c8ca0c41b7734be97d4c7e48ad7fb8fe"
	I1124 09:28:52.299595  255979 logs.go:123] Gathering logs for kube-controller-manager [233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564] ...
	I1124 09:28:52.299629  255979 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 233678c2ca1d918d0138484ead593da5f93bb0a14e362254ee87521b9ec02564"
	I1124 09:28:52.326458  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:52.326481  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1124 09:28:52.326530  255979 out.go:285] X Problems detected in kube-scheduler [fc4b387725a161b50e96ff3e3fba41824daaa399531757f84b78605d38ac5433]:
	W1124 09:28:52.326546  255979 out.go:285]   E1124 09:27:59.715466       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1124 09:28:52.326552  255979 out.go:374] Setting ErrFile to fd 2...
	I1124 09:28:52.326559  255979 out.go:408] TERM=,COLORTERM=, which probably does not support color
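
The problem flagged above is a port collision: a second kube-scheduler instance could not bind 127.0.0.1:10259 because an earlier one still held it. A minimal way to confirm which process holds the port, as a sketch (the profile name is taken from the surrounding sections; ss and crictl are assumed present in the node image):

    # Shell into the affected node.
    minikube ssh -p no-preload-938348

    # Inside the node: list the listener on the scheduler's secure port.
    sudo ss -ltnp | grep 10259

    # Map the owning process back to a running container.
    sudo crictl ps --name kube-scheduler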
	
	
	==> CRI-O <==
	Nov 24 09:28:46 no-preload-938348 crio[773]: time="2025-11-24T09:28:46.128535898Z" level=info msg="Starting container: ddb714a0436daae55778287c43b5c03a8b843dbc98050c01c26ce6e3fdba61dc" id=401c71ce-f1d8-49ed-b19d-a126a4d4cd73 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:28:46 no-preload-938348 crio[773]: time="2025-11-24T09:28:46.130851801Z" level=info msg="Started container" PID=3159 containerID=ddb714a0436daae55778287c43b5c03a8b843dbc98050c01c26ce6e3fdba61dc description=kube-system/coredns-7d764666f9-ll2c4/coredns id=401c71ce-f1d8-49ed-b19d-a126a4d4cd73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a5d7c6474c067696cadee14e699710a7fa9cde0bc6ebd2ae2be59726864fbab
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.827744127Z" level=info msg="Running pod sandbox: default/busybox/POD" id=faf6f687-c68d-458b-9198-e4f41600ad5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.827812629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.832580223Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cfcc4fbf45f6de31dfedc91b05d1a90462ae8791f205138de7bf151ce5620c69 UID:4f8a9222-5610-494c-8cd8-a464fdacd234 NetNS:/var/run/netns/bff907cf-ec50-4d4c-b3b7-6fb919ec5a90 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000416c18}] Aliases:map[]}"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.832614935Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.842489699Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cfcc4fbf45f6de31dfedc91b05d1a90462ae8791f205138de7bf151ce5620c69 UID:4f8a9222-5610-494c-8cd8-a464fdacd234 NetNS:/var/run/netns/bff907cf-ec50-4d4c-b3b7-6fb919ec5a90 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000416c18}] Aliases:map[]}"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.842630581Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.843502501Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.844610457Z" level=info msg="Ran pod sandbox cfcc4fbf45f6de31dfedc91b05d1a90462ae8791f205138de7bf151ce5620c69 with infra container: default/busybox/POD" id=faf6f687-c68d-458b-9198-e4f41600ad5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.845881735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=169f6f59-4abc-4a0c-a44c-bdb2d9360e48 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.84599989Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=169f6f59-4abc-4a0c-a44c-bdb2d9360e48 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.846041817Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=169f6f59-4abc-4a0c-a44c-bdb2d9360e48 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.8468708Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=587bcffd-faf9-4087-b613-f308a9f16cd6 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:28:48 no-preload-938348 crio[773]: time="2025-11-24T09:28:48.848274675Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.044752627Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=587bcffd-faf9-4087-b613-f308a9f16cd6 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.045486829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=529e3b65-cbea-461d-929b-59eb3b3b7a19 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.047510284Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ec983b32-6a39-46d9-800a-29e86edf6a38 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.050959343Z" level=info msg="Creating container: default/busybox/busybox" id=07523ce6-99fe-413e-940b-826b5996e705 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.051084532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.055968995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.056535708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.095931545Z" level=info msg="Created container 06f9224ecf35317864b347b6c377b8c6070b6dc0f3d7c98b766836f4d6370cca: default/busybox/busybox" id=07523ce6-99fe-413e-940b-826b5996e705 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.096719653Z" level=info msg="Starting container: 06f9224ecf35317864b347b6c377b8c6070b6dc0f3d7c98b766836f4d6370cca" id=93114b1b-bb0d-46a5-a647-938fc57125b0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:28:50 no-preload-938348 crio[773]: time="2025-11-24T09:28:50.098963514Z" level=info msg="Started container" PID=3232 containerID=06f9224ecf35317864b347b6c377b8c6070b6dc0f3d7c98b766836f4d6370cca description=default/busybox/busybox id=93114b1b-bb0d-46a5-a647-938fc57125b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfcc4fbf45f6de31dfedc91b05d1a90462ae8791f205138de7bf151ce5620c69
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	06f9224ecf353       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   cfcc4fbf45f6d       busybox                                     default
	ddb714a0436da       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   2a5d7c6474c06       coredns-7d764666f9-ll2c4                    kube-system
	05933f92aad85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   b9f102de7bbb5       storage-provisioner                         kube-system
	d406fd2a56673       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   73a8cb17a7d07       kindnet-zrnnf                               kube-system
	4f1fb2a4494d2       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      24 seconds ago      Running             kube-proxy                0                   5f6816d652484       kube-proxy-smqgp                            kube-system
	1d426389272f7       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   f76c264eb3ec8       kube-apiserver-no-preload-938348            kube-system
	738f2f0db4071       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   21051f5cc0e43       etcd-no-preload-938348                      kube-system
	1ba19a39f4c3b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   c3a0ec1522d2d       kube-scheduler-no-preload-938348            kube-system
	2f0fdb7bbd3db       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   bf16f6877702a       kube-controller-manager-no-preload-938348   kube-system
	
	
	==> coredns [ddb714a0436daae55778287c43b5c03a8b843dbc98050c01c26ce6e3fdba61dc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35144 - 23565 "HINFO IN 859310315379989527.7614686349011005942. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.154897118s
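
The NXDOMAIN reply above appears to be CoreDNS's own loop-detection self-query (a random-label HINFO lookup), not a client failure. A quick end-to-end exercise of cluster DNS, as a sketch (the pod name is illustrative; the busybox image is the one this suite already pulls):

    # Resolve the API service through cluster DNS from a throwaway pod.
    kubectl run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
      nslookup kubernetes.default.svc.cluster.local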
	
	
	==> describe nodes <==
	Name:               no-preload-938348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-938348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=no-preload-938348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-938348
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:28:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:28:45 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:28:45 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:28:45 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:28:45 +0000   Mon, 24 Nov 2025 09:28:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-938348
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                5a3d5c1e-1c49-4ac3-aca7-a3f8db3c500c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-ll2c4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-938348                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-zrnnf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-938348             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-938348    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-smqgp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-938348             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-938348 event: Registered Node no-preload-938348 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [738f2f0db40717fc5e71fd7ae6fb5298932ee80b39d5f552dc64798a2d3a1ece] <==
	{"level":"warn","ts":"2025-11-24T09:28:24.658757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.665408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.673234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.679682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.686487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.694550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.702478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.713660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.721306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.727802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.738461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.753724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.760935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.768824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.775507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.782947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.789366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.796835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.804388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.811053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.822564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.829813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.836838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:28:24.844664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41032","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:28:32.207627Z","caller":"traceutil/trace.go:172","msg":"trace[1698078879] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"103.370272ms","start":"2025-11-24T09:28:32.104232Z","end":"2025-11-24T09:28:32.207602Z","steps":["trace[1698078879] 'process raft request'  (duration: 81.888098ms)","trace[1698078879] 'compare'  (duration: 21.376991ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:28:58 up  1:11,  0 user,  load average: 3.59, 3.16, 2.07
	Linux no-preload-938348 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d406fd2a566732dc9df630f66e821b7f05ecfd8bf0600258609ef6b1d4531ed8] <==
	I1124 09:28:35.328815       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:28:35.329118       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:28:35.329289       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:28:35.329310       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:28:35.329369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:28:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:28:35.626230       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:28:35.725854       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:28:35.725874       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:28:35.726926       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:28:36.126934       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:28:36.126965       1 metrics.go:72] Registering metrics
	I1124 09:28:36.127044       1 controller.go:711] "Syncing nftables rules"
	I1124 09:28:45.538786       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:28:45.538854       1 main.go:301] handling current node
	I1124 09:28:55.541702       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:28:55.541746       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d426389272f7b9d1f5a3a805b115f567a59374a3b67725b3381ecf72c3c6b1a] <==
	I1124 09:28:25.528014       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:28:25.528932       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:28:25.528971       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 09:28:25.540956       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:28:25.542005       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1124 09:28:25.545017       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:28:25.552281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:28:26.427062       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1124 09:28:26.432236       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:28:26.432257       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:28:26.969948       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:28:27.024743       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:28:27.136692       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:28:27.144773       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 09:28:27.146162       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:28:27.152105       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:28:27.468472       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:28:28.271959       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:28:28.287361       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:28:28.297551       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:28:33.070243       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:28:33.074805       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:28:33.369313       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:28:33.418239       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 09:28:56.616266       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:36980: use of closed network connection
	
	
	==> kube-controller-manager [2f0fdb7bbd3db6066c8de57cce576f36d500f5754e0aaae47371e453e85b42e6] <==
	I1124 09:28:32.324517       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.324542       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 09:28:32.324604       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-938348"
	I1124 09:28:32.324692       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:28:32.325744       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325767       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325779       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325834       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325923       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325932       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325984       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.325993       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326093       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326129       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326152       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326214       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326285       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326696       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.326721       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:28:32.326727       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:28:32.326975       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.330291       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-938348" podCIDRs=["10.244.0.0/24"]
	I1124 09:28:32.334080       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:32.377185       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:47.327313       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4f1fb2a4494d220e3b7f2c400ae09ccc47e3265fdd58dc848f79afe7d63ea283] <==
	I1124 09:28:33.971076       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:28:34.060710       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:28:34.161970       1 shared_informer.go:377] "Caches are synced"
	I1124 09:28:34.162022       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:28:34.162120       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:28:34.189304       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:28:34.189391       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:28:34.197181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:28:34.200681       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:28:34.201281       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:28:34.204226       1 config.go:200] "Starting service config controller"
	I1124 09:28:34.204313       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:28:34.204414       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:28:34.204540       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:28:34.204733       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:28:34.204779       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:28:34.205253       1 config.go:309] "Starting node config controller"
	I1124 09:28:34.205294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:28:34.304662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:28:34.304816       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:28:34.305171       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:28:34.305372       1 shared_informer.go:356] "Caches are synced" controller="node config"
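
The nodePortAddresses warning above is advisory rather than fatal. On a kubeadm-provisioned cluster such as this one the setting lives in the kube-proxy ConfigMap; a sketch of applying the value the log itself suggests (ConfigMap and DaemonSet names are the kubeadm defaults):

    # Inspect the current value.
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses

    # Edit the config and set:  nodePortAddresses: ["primary"]
    kubectl -n kube-system edit configmap kube-proxy

    # Roll the pods so they pick up the change.
    kubectl -n kube-system rollout restart daemonset kube-proxy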
	
	
	==> kube-scheduler [1ba19a39f4c3b71c66b7e9c3404c1d94c5fea970b3a44f05b025ebd74e990ca1] <==
	E1124 09:28:26.411089       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:28:26.412235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 09:28:26.467427       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1124 09:28:26.467444       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:28:26.469094       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 09:28:26.469094       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 09:28:26.544826       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:28:26.545905       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 09:28:26.583516       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1124 09:28:26.584778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1124 09:28:26.639929       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1124 09:28:26.641021       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 09:28:26.654160       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1124 09:28:26.655216       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 09:28:26.665179       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1124 09:28:26.666525       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 09:28:26.677664       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:28:26.678890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 09:28:26.728393       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:28:26.729436       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 09:28:26.731329       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1124 09:28:26.732241       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 09:28:26.762614       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1124 09:28:26.763736       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1124 09:28:28.508904       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456622    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/045fb194-89ac-48bb-a9af-24c93032274f-kube-proxy\") pod \"kube-proxy-smqgp\" (UID: \"045fb194-89ac-48bb-a9af-24c93032274f\") " pod="kube-system/kube-proxy-smqgp"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456652    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/045fb194-89ac-48bb-a9af-24c93032274f-xtables-lock\") pod \"kube-proxy-smqgp\" (UID: \"045fb194-89ac-48bb-a9af-24c93032274f\") " pod="kube-system/kube-proxy-smqgp"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456676    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/045fb194-89ac-48bb-a9af-24c93032274f-lib-modules\") pod \"kube-proxy-smqgp\" (UID: \"045fb194-89ac-48bb-a9af-24c93032274f\") " pod="kube-system/kube-proxy-smqgp"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456700    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6h4\" (UniqueName: \"kubernetes.io/projected/045fb194-89ac-48bb-a9af-24c93032274f-kube-api-access-tz6h4\") pod \"kube-proxy-smqgp\" (UID: \"045fb194-89ac-48bb-a9af-24c93032274f\") " pod="kube-system/kube-proxy-smqgp"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456723    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ade02f32-ef6b-4bca-b2da-3a67433a796c-cni-cfg\") pod \"kindnet-zrnnf\" (UID: \"ade02f32-ef6b-4bca-b2da-3a67433a796c\") " pod="kube-system/kindnet-zrnnf"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456777    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ade02f32-ef6b-4bca-b2da-3a67433a796c-xtables-lock\") pod \"kindnet-zrnnf\" (UID: \"ade02f32-ef6b-4bca-b2da-3a67433a796c\") " pod="kube-system/kindnet-zrnnf"
	Nov 24 09:28:33 no-preload-938348 kubelet[2545]: I1124 09:28:33.456813    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jjs\" (UniqueName: \"kubernetes.io/projected/ade02f32-ef6b-4bca-b2da-3a67433a796c-kube-api-access-j9jjs\") pod \"kindnet-zrnnf\" (UID: \"ade02f32-ef6b-4bca-b2da-3a67433a796c\") " pod="kube-system/kindnet-zrnnf"
	Nov 24 09:28:34 no-preload-938348 kubelet[2545]: I1124 09:28:34.199623    2545 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-smqgp" podStartSLOduration=1.199581077 podStartE2EDuration="1.199581077s" podCreationTimestamp="2025-11-24 09:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:34.199012762 +0000 UTC m=+6.165777452" watchObservedRunningTime="2025-11-24 09:28:34.199581077 +0000 UTC m=+6.166345768"
	Nov 24 09:28:35 no-preload-938348 kubelet[2545]: I1124 09:28:35.195825    2545 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-zrnnf" podStartSLOduration=0.830756258 podStartE2EDuration="2.195808921s" podCreationTimestamp="2025-11-24 09:28:33 +0000 UTC" firstStartedPulling="2025-11-24 09:28:33.752584945 +0000 UTC m=+5.719349617" lastFinishedPulling="2025-11-24 09:28:35.117637608 +0000 UTC m=+7.084402280" observedRunningTime="2025-11-24 09:28:35.195794008 +0000 UTC m=+7.162558698" watchObservedRunningTime="2025-11-24 09:28:35.195808921 +0000 UTC m=+7.162573610"
	Nov 24 09:28:36 no-preload-938348 kubelet[2545]: E1124 09:28:36.551465    2545 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-938348" containerName="kube-apiserver"
	Nov 24 09:28:39 no-preload-938348 kubelet[2545]: E1124 09:28:39.599502    2545 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-938348" containerName="etcd"
	Nov 24 09:28:41 no-preload-938348 kubelet[2545]: E1124 09:28:41.877625    2545 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-938348" containerName="kube-scheduler"
	Nov 24 09:28:42 no-preload-938348 kubelet[2545]: E1124 09:28:42.077785    2545 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-938348" containerName="kube-controller-manager"
	Nov 24 09:28:45 no-preload-938348 kubelet[2545]: I1124 09:28:45.736094    2545 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Nov 24 09:28:45 no-preload-938348 kubelet[2545]: I1124 09:28:45.851315    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgnt\" (UniqueName: \"kubernetes.io/projected/701c213c-777c-488b-972b-2c1c4ad85d6a-kube-api-access-nlgnt\") pod \"storage-provisioner\" (UID: \"701c213c-777c-488b-972b-2c1c4ad85d6a\") " pod="kube-system/storage-provisioner"
	Nov 24 09:28:45 no-preload-938348 kubelet[2545]: I1124 09:28:45.851431    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f976359-8745-4fe5-8cc4-df9cafaca113-config-volume\") pod \"coredns-7d764666f9-ll2c4\" (UID: \"9f976359-8745-4fe5-8cc4-df9cafaca113\") " pod="kube-system/coredns-7d764666f9-ll2c4"
	Nov 24 09:28:45 no-preload-938348 kubelet[2545]: I1124 09:28:45.851470    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkg52\" (UniqueName: \"kubernetes.io/projected/9f976359-8745-4fe5-8cc4-df9cafaca113-kube-api-access-rkg52\") pod \"coredns-7d764666f9-ll2c4\" (UID: \"9f976359-8745-4fe5-8cc4-df9cafaca113\") " pod="kube-system/coredns-7d764666f9-ll2c4"
	Nov 24 09:28:45 no-preload-938348 kubelet[2545]: I1124 09:28:45.851511    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/701c213c-777c-488b-972b-2c1c4ad85d6a-tmp\") pod \"storage-provisioner\" (UID: \"701c213c-777c-488b-972b-2c1c4ad85d6a\") " pod="kube-system/storage-provisioner"
	Nov 24 09:28:46 no-preload-938348 kubelet[2545]: E1124 09:28:46.211478    2545 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ll2c4" containerName="coredns"
	Nov 24 09:28:46 no-preload-938348 kubelet[2545]: I1124 09:28:46.222777    2545 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.222756674 podStartE2EDuration="12.222756674s" podCreationTimestamp="2025-11-24 09:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:46.222503086 +0000 UTC m=+18.189267778" watchObservedRunningTime="2025-11-24 09:28:46.222756674 +0000 UTC m=+18.189521364"
	Nov 24 09:28:46 no-preload-938348 kubelet[2545]: I1124 09:28:46.238813    2545 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ll2c4" podStartSLOduration=13.238792166 podStartE2EDuration="13.238792166s" podCreationTimestamp="2025-11-24 09:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:28:46.238721504 +0000 UTC m=+18.205486194" watchObservedRunningTime="2025-11-24 09:28:46.238792166 +0000 UTC m=+18.205556855"
	Nov 24 09:28:46 no-preload-938348 kubelet[2545]: E1124 09:28:46.556044    2545 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-938348" containerName="kube-apiserver"
	Nov 24 09:28:47 no-preload-938348 kubelet[2545]: E1124 09:28:47.213554    2545 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ll2c4" containerName="coredns"
	Nov 24 09:28:48 no-preload-938348 kubelet[2545]: E1124 09:28:48.215521    2545 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ll2c4" containerName="coredns"
	Nov 24 09:28:48 no-preload-938348 kubelet[2545]: I1124 09:28:48.570375    2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwkx\" (UniqueName: \"kubernetes.io/projected/4f8a9222-5610-494c-8cd8-a464fdacd234-kube-api-access-gbwkx\") pod \"busybox\" (UID: \"4f8a9222-5610-494c-8cd8-a464fdacd234\") " pod="default/busybox"
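
Reading the kindnet-zrnnf startup-latency entry at the top of this kubelet excerpt, the three reported durations reconcile exactly once image pull time is subtracted; the relationship below is worked directly from the log's own timestamps (the field semantics are inferred from the arithmetic, not taken from kubelet source):

	image pull time     = lastFinishedPulling - firstStartedPulling
	                    = 09:28:35.117637608 - 09:28:33.752584945 = 1.365052663s
	podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
	                    = 09:28:35.195808921 - 09:28:33.000000000 = 2.195808921s
	podStartSLOduration = podStartE2EDuration - image pull time
	                    = 2.195808921s - 1.365052663s = 0.830756258s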
	
	
	==> storage-provisioner [05933f92aad853fbc40a70b38eb0c5fdaaec1f1465331b67f1ac0fe15f7c9fac] <==
	I1124 09:28:46.139898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:28:46.150489       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:28:46.150559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:28:46.153413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:46.162087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:28:46.162211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:28:46.162308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3e597c6-18df-436b-9dd0-bf6a334e6e38", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-938348_742499fd-4872-4d7c-92e3-82f0f9e47808 became leader
	I1124 09:28:46.162370       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-938348_742499fd-4872-4d7c-92e3-82f0f9e47808!
	W1124 09:28:46.164570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:46.170934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:28:46.262612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-938348_742499fd-4872-4d7c-92e3-82f0f9e47808!
	W1124 09:28:48.174941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:48.180893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:50.184065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:50.189846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:52.192974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:52.198658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:54.203017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:54.207811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:56.211363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:56.215448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:58.218095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:28:58.305131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
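The repeated warnings in the storage-provisioner log above come from its leader election, which polls a v1 Endpoints lock object (kube-system/k8s.io-minikube-hostpath, named in the log) roughly every two seconds. A minimal sketch for inspecting that lock from the host, assuming the profile's kubeconfig context is still available (these commands are illustrative and were not part of the test run):

	kubectl --context no-preload-938348 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# newer controllers avoid the deprecation warning by holding a coordination.k8s.io/v1 Lease instead:
	kubectl --context no-preload-938348 -n kube-system get leases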
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-938348 -n no-preload-938348
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-938348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (323.807152ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:29:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
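The MK_ADDON_ENABLE_PAUSED exit above is raised before the addon is ever applied: minikube first checks whether the cluster is paused by listing runc containers inside the node container, and `sudo runc list -f json` fails because the runtime state directory /run/runc does not exist. A minimal sketch for reproducing that check by hand, assuming the profile from this run is still up (illustrative, not part of the test):

	# re-run the exact listing minikube attempted, inside the node container
	out/minikube-linux-amd64 -p default-k8s-diff-port-164377 ssh -- sudo runc list -f json
	# cross-check what the CRI-O runtime itself reports
	out/minikube-linux-amd64 -p default-k8s-diff-port-164377 ssh -- sudo crictl ps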
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-164377 describe deploy/metrics-server -n kube-system: exit status 1 (69.958836ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-164377 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
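The assertion at start_stop_delete_test.go:219 verifies that the deployment's container image reflects the --images/--registries overrides passed to `addons enable`. Had the deployment been created, a one-line sketch of the same check (illustrative; the jsonpath expression is an assumption about where the image field would sit):

	kubectl --context default-k8s-diff-port-164377 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects this to contain: fake.domain/registry.k8s.io/echoserver:1.4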
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-164377
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-164377:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	        "Created": "2025-11-24T09:28:58.752077739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:28:58.802773876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hostname",
	        "HostsPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hosts",
	        "LogPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c-json.log",
	        "Name": "/default-k8s-diff-port-164377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-164377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-164377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	                "LowerDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-164377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-164377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-164377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "127cd459f1a532ae06f776d169dbd405585e3746ed9bcdba5d84bf665d6a46f4",
	            "SandboxKey": "/var/run/docker/netns/127cd459f1a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-164377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1e00630149587d24459445d686d13d40af862a7ea70db024de88f2ab8bf6b09",
	                    "EndpointID": "7cb3d3e5e9ed9ff7e031812555358a3aa70142eda27833791f51136043cc50bd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "b2:af:56:95:5e:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-164377",
	                        "83d485128258"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
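Both the test harness and minikube consume this inspect output through Go templates rather than parsing the JSON by hand; for example, the SSH port mapping used in the logs below can be extracted with (value taken from the Ports section above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-164377
	# prints 33103 for this run: the host port minikube dials to reach the node over SSH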
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25
E1124 09:29:46.161032    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.167511    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.179046    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.200870    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.242320    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.323727    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.485532    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:46.806717    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25: (1.325055913s)
E1124 09:29:47.448027    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo cri-dockerd --version                                                                                                                                                                                                          │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                            │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo containerd config dump                                                                                                                                                                                                         │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo crio config                                                                                                                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ delete  │ -p bridge-949664                                                                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ stop    │ -p old-k8s-version-767267 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ stop    │ -p no-preload-938348 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:29:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
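	(As an example, the first entry below decodes under that format as: severity I = Info, date 1124 = Nov 24, time 09:29:22.272630, thread id 335638, source location out.go:360.)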
	I1124 09:29:22.272630  335638 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:29:22.272773  335638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:29:22.272780  335638 out.go:374] Setting ErrFile to fd 2...
	I1124 09:29:22.272786  335638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:29:22.273086  335638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:29:22.273711  335638 out.go:368] Setting JSON to false
	I1124 09:29:22.275349  335638 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4308,"bootTime":1763972254,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:29:22.275434  335638 start.go:143] virtualization: kvm guest
	I1124 09:29:22.277385  335638 out.go:179] * [newest-cni-639420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:29:22.280788  335638 notify.go:221] Checking for updates...
	I1124 09:29:22.281556  335638 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:29:22.285973  335638 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:29:22.290210  335638 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:22.295020  335638 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:29:22.296440  335638 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:29:22.297660  335638 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:29:22.300201  335638 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:29:22.300438  335638 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:22.300608  335638 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:29:22.300773  335638 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:29:22.338416  335638 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:29:22.338809  335638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:29:22.436220  335638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:29:22.419440868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:29:22.436383  335638 docker.go:319] overlay module found
	I1124 09:29:22.438008  335638 out.go:179] * Using the docker driver based on user configuration
	I1124 09:29:22.439426  335638 start.go:309] selected driver: docker
	I1124 09:29:22.439445  335638 start.go:927] validating driver "docker" against <nil>
	I1124 09:29:22.439461  335638 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:29:22.440211  335638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:29:22.545267  335638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:29:22.530190875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:29:22.545483  335638 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 09:29:22.545524  335638 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 09:29:22.545801  335638 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:29:22.547822  335638 out.go:179] * Using Docker driver with root privileges
	I1124 09:29:22.550027  335638 cni.go:84] Creating CNI manager for ""
	I1124 09:29:22.550124  335638 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:22.550136  335638 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:29:22.550237  335638 start.go:353] cluster config:
	{Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:22.552057  335638 out.go:179] * Starting "newest-cni-639420" primary control-plane node in "newest-cni-639420" cluster
	I1124 09:29:22.553907  335638 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:29:22.556029  335638 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:29:22.557293  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:22.557467  335638 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	W1124 09:29:22.584375  335638 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 09:29:22.588704  335638 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:29:22.588745  335638 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:29:22.602625  335638 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 09:29:22.602854  335638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json ...
	I1124 09:29:22.602894  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json: {Name:mk7ccc8de387c8d9d793f2cc19c6bdd452036813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:22.603077  335638 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:29:22.603108  335638 start.go:360] acquireMachinesLock for newest-cni-639420: {Name:mka282f4f1046f315e8564ac5db60bb2850ef5e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:22.603114  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:22.603166  335638 start.go:364] duration metric: took 43.146µs to acquireMachinesLock for "newest-cni-639420"
	I1124 09:29:22.603190  335638 start.go:93] Provisioning new machine with config: &{Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:29:22.603284  335638 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:29:22.266382  326387 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:22.266403  326387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:29:22.266467  326387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:29:22.267204  326387 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	I1124 09:29:22.267247  326387 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:29:22.267721  326387 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:29:22.305659  326387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:29:22.309583  326387 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:22.309602  326387 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:29:22.309658  326387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:29:22.340814  326387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:29:22.361110  326387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:29:22.421701  326387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:22.451252  326387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:22.514208  326387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:22.634103  326387 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:29:22.635287  326387 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:29:22.897508  326387 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:29:22.127319  333403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:29:22.127371  333403 machine.go:97] duration metric: took 4.259939775s to provisionDockerMachine
	I1124 09:29:22.127386  333403 start.go:293] postStartSetup for "no-preload-938348" (driver="docker")
	I1124 09:29:22.127401  333403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:29:22.127467  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:29:22.127527  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.150125  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.262641  333403 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:29:22.267327  333403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:29:22.267426  333403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:29:22.267438  333403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:29:22.267509  333403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:29:22.267627  333403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:29:22.267738  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:29:22.279630  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:22.318304  333403 start.go:296] duration metric: took 190.884017ms for postStartSetup
	I1124 09:29:22.318392  333403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:29:22.318441  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.349125  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.491751  333403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:29:22.499171  333403 fix.go:56] duration metric: took 5.297394876s for fixHost
	I1124 09:29:22.499196  333403 start.go:83] releasing machines lock for "no-preload-938348", held for 5.297447121s
	I1124 09:29:22.499275  333403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-938348
	I1124 09:29:22.528437  333403 ssh_runner.go:195] Run: cat /version.json
	I1124 09:29:22.528502  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.528696  333403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:29:22.528785  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.556869  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.557396  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.749108  333403 ssh_runner.go:195] Run: systemctl --version
	I1124 09:29:22.761160  333403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:29:22.810862  333403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:29:22.820245  333403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:29:22.820398  333403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:29:22.832582  333403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:29:22.832607  333403 start.go:496] detecting cgroup driver to use...
	I1124 09:29:22.832638  333403 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:29:22.832680  333403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:29:22.856642  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:29:22.878589  333403 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:29:22.878700  333403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:29:22.898010  333403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:29:22.913490  333403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:29:23.015325  333403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:29:23.109378  333403 docker.go:234] disabling docker service ...
	I1124 09:29:23.109472  333403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:29:23.131263  333403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:29:23.146998  333403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:29:23.252976  333403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:29:23.357243  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:29:23.370932  333403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:29:23.386424  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:23.534936  333403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:29:23.535002  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.545781  333403 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:29:23.545842  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.556112  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.566045  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.575401  333403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:29:23.584608  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.594614  333403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.604212  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
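Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

conmon_cgroup is deleted and re-inserted directly after cgroup_manager so the two stay consistent with the systemd cgroup driver, and the unprivileged-port sysctl lets containers bind low ports without extra capabilities.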
	I1124 09:29:23.613396  333403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:29:23.622174  333403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:29:23.629636  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:23.729103  333403 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:29:23.879968  333403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:29:23.880049  333403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:29:23.884669  333403 start.go:564] Will wait 60s for crictl version
	I1124 09:29:23.884720  333403 ssh_runner.go:195] Run: which crictl
	I1124 09:29:23.889584  333403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:29:23.920505  333403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:29:23.920586  333403 ssh_runner.go:195] Run: crio --version
	I1124 09:29:23.955234  333403 ssh_runner.go:195] Run: crio --version
	I1124 09:29:24.004516  333403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:29:22.899175  326387 addons.go:530] duration metric: took 663.317847ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:29:23.139102  326387 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-164377" context rescaled to 1 replicas
	I1124 09:29:20.654404  330481 addons.go:530] duration metric: took 3.007173003s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 09:29:20.655727  330481 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:29:20.655749  330481 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
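The 500s above are expected while the rbac/bootstrap-roles post-start hook finishes; minikube simply keeps polling until /healthz returns 200, which happens at 09:29:21 below. A minimal standalone sketch of such a wait loop (hypothetical Go, not minikube's actual code; the address is taken from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Apiserver endpoint from the log; substitute your own cluster's address.
        const url = "https://192.168.76.2:8443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is signed by the cluster CA, which is not in the
            // system trust store; skip verification for this illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }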
	I1124 09:29:21.151464  330481 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:29:21.156548  330481 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:29:21.157870  330481 api_server.go:141] control plane version: v1.28.0
	I1124 09:29:21.157899  330481 api_server.go:131] duration metric: took 507.328952ms to wait for apiserver health ...
	I1124 09:29:21.157911  330481 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:21.163672  330481 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:21.163710  330481 system_pods.go:61] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:21.163721  330481 system_pods.go:61] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:21.163730  330481 system_pods.go:61] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:29:21.163739  330481 system_pods.go:61] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:21.163753  330481 system_pods.go:61] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:21.163763  330481 system_pods.go:61] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:29:21.163773  330481 system_pods.go:61] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:21.163783  330481 system_pods.go:61] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:21.163797  330481 system_pods.go:74] duration metric: took 5.878829ms to wait for pod list to return data ...
	I1124 09:29:21.163809  330481 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:21.167221  330481 default_sa.go:45] found service account: "default"
	I1124 09:29:21.167291  330481 default_sa.go:55] duration metric: took 3.474305ms for default service account to be created ...
	I1124 09:29:21.167308  330481 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:21.171183  330481 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:21.171207  330481 system_pods.go:89] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:21.171230  330481 system_pods.go:89] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:21.171241  330481 system_pods.go:89] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:29:21.171254  330481 system_pods.go:89] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:21.171263  330481 system_pods.go:89] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:21.171275  330481 system_pods.go:89] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:29:21.171285  330481 system_pods.go:89] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:21.171291  330481 system_pods.go:89] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:21.171300  330481 system_pods.go:126] duration metric: took 3.986426ms to wait for k8s-apps to be running ...
	I1124 09:29:21.171310  330481 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:21.171387  330481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:21.203837  330481 system_svc.go:56] duration metric: took 32.519797ms WaitForService to wait for kubelet
	I1124 09:29:21.203864  330481 kubeadm.go:587] duration metric: took 3.55738421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:21.203880  330481 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:21.207017  330481 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:21.207050  330481 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:21.207067  330481 node_conditions.go:105] duration metric: took 3.18201ms to run NodePressure ...
	I1124 09:29:21.207081  330481 start.go:242] waiting for startup goroutines ...
	I1124 09:29:21.207090  330481 start.go:247] waiting for cluster config update ...
	I1124 09:29:21.207102  330481 start.go:256] writing updated cluster config ...
	I1124 09:29:21.207440  330481 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:21.212241  330481 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:21.218052  330481 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gmgwv" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:29:23.225315  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:24.005626  333403 cli_runner.go:164] Run: docker network inspect no-preload-938348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:24.029008  333403 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 09:29:24.034021  333403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:29:24.047095  333403 kubeadm.go:884] updating cluster {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:29:24.047302  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.218154  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.389851  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.541160  333403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:24.541226  333403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:29:24.578918  333403 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:29:24.578939  333403 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:29:24.578949  333403 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:29:24.579051  333403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-938348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
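In the unit drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing an inherited ExecStart before redefining it; the fragment is installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 374-byte scp), next to a regenerated /lib/systemd/system/kubelet.service.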
	I1124 09:29:24.579135  333403 ssh_runner.go:195] Run: crio config
	I1124 09:29:24.624916  333403 cni.go:84] Creating CNI manager for ""
	I1124 09:29:24.624945  333403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:24.624965  333403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:29:24.624998  333403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-938348 NodeName:no-preload-938348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:29:24.625197  333403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-938348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
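The generated config above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written below to /var/tmp/minikube/kubeadm.yaml.new (2220 bytes) and later diffed against the previous /var/tmp/minikube/kubeadm.yaml at 09:29:25.569405 to decide whether the control plane needs reconfiguring. A small sketch of walking such a stream document by document (hypothetical code using gopkg.in/yaml.v3, not minikube's own parsing):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // path is illustrative
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates the "---"-separated documents
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }

Run against the config above, it would print the four kinds with their kubeadm.k8s.io, kubelet.config.k8s.io, and kubeproxy.config.k8s.io API versions.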
	I1124 09:29:24.625264  333403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:24.634370  333403 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:29:24.634419  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:29:24.642798  333403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:29:24.656549  333403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:29:24.669132  333403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1124 09:29:24.681839  333403 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:29:24.685751  333403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
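The /etc/hosts updates at 09:29:24.034021 and here follow the same pattern: grep -v drops any existing line ending in the tab-prefixed host name, echo appends the fresh mapping, the combined output goes to a temp file, and sudo cp moves it back, since the unprivileged runner cannot write /etc/hosts in place.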
	I1124 09:29:24.695690  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:24.778867  333403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:24.806996  333403 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348 for IP: 192.168.94.2
	I1124 09:29:24.807015  333403 certs.go:195] generating shared ca certs ...
	I1124 09:29:24.807035  333403 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:24.807182  333403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:29:24.807254  333403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:29:24.807267  333403 certs.go:257] generating profile certs ...
	I1124 09:29:24.807411  333403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.key
	I1124 09:29:24.807497  333403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983
	I1124 09:29:24.807556  333403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key
	I1124 09:29:24.807691  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:29:24.807735  333403 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:29:24.807749  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:29:24.807783  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:29:24.807819  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:29:24.807858  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:29:24.807920  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:24.808541  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:29:24.827500  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:29:24.848541  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:29:24.872286  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:29:24.896028  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:29:24.917352  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:29:24.934258  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:29:24.951088  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:29:24.967701  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:29:24.984489  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:29:25.002482  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:29:25.020506  333403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:29:25.033647  333403 ssh_runner.go:195] Run: openssl version
	I1124 09:29:25.039835  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:29:25.048276  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.051920  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.051971  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.089557  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:29:25.098198  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:29:25.107138  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.111148  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.111218  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.149419  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:29:25.158007  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:29:25.166936  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.170800  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.170846  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.206573  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
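Each openssl x509 -hash -noout invocation above prints the subject-name hash that OpenSSL's CApath-style lookup expects as a file name, and the paired ln -fs creates /etc/ssl/certs/<hash>.0 pointing back at the certificate. The hashes can be read straight off the symlink names: 92432.pem hashes to 3ec20f2e, minikubeCA.pem to b5213941, and 9243.pem to 51391683.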
	I1124 09:29:25.214835  333403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:29:25.218635  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:29:25.254289  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:29:25.291772  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:29:25.340024  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:29:25.389077  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:29:25.438716  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:29:25.497608  333403 kubeadm.go:401] StartCluster: {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:25.497724  333403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:29:25.497778  333403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:29:25.534464  333403 cri.go:89] found id: "3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9"
	I1124 09:29:25.534487  333403 cri.go:89] found id: "36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652"
	I1124 09:29:25.534493  333403 cri.go:89] found id: "a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548"
	I1124 09:29:25.534513  333403 cri.go:89] found id: "bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751"
	I1124 09:29:25.534518  333403 cri.go:89] found id: ""
	I1124 09:29:25.534562  333403 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:29:25.547091  333403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:29:25Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:29:25.547156  333403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:29:25.557888  333403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:29:25.557906  333403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:29:25.557949  333403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:29:25.565397  333403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:29:25.566348  333403 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-938348" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:25.566811  333403 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-938348" cluster setting kubeconfig missing "no-preload-938348" context setting]
	I1124 09:29:25.567631  333403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.569405  333403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:29:25.577075  333403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 09:29:25.577105  333403 kubeadm.go:602] duration metric: took 19.193886ms to restartPrimaryControlPlane
	I1124 09:29:25.577114  333403 kubeadm.go:403] duration metric: took 79.517412ms to StartCluster
	I1124 09:29:25.577130  333403 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.577190  333403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:25.578596  333403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.578833  333403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:29:25.578891  333403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:29:25.578988  333403 addons.go:70] Setting storage-provisioner=true in profile "no-preload-938348"
	I1124 09:29:25.579008  333403 addons.go:239] Setting addon storage-provisioner=true in "no-preload-938348"
	W1124 09:29:25.579016  333403 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:29:25.579013  333403 addons.go:70] Setting dashboard=true in profile "no-preload-938348"
	I1124 09:29:25.579038  333403 addons.go:239] Setting addon dashboard=true in "no-preload-938348"
	I1124 09:29:25.579043  333403 host.go:66] Checking if "no-preload-938348" exists ...
	W1124 09:29:25.579048  333403 addons.go:248] addon dashboard should already be in state true
	I1124 09:29:25.579047  333403 addons.go:70] Setting default-storageclass=true in profile "no-preload-938348"
	I1124 09:29:25.579077  333403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-938348"
	I1124 09:29:25.579079  333403 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:29:25.579056  333403 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:25.579415  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.579572  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.579577  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.581476  333403 out.go:179] * Verifying Kubernetes components...
	I1124 09:29:25.582938  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:25.614256  333403 addons.go:239] Setting addon default-storageclass=true in "no-preload-938348"
	W1124 09:29:25.614279  333403 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:29:25.614305  333403 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:29:25.614804  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.617177  333403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:25.617869  333403 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:29:25.618562  333403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:25.618581  333403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:29:25.618636  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.620039  333403 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:29:22.606204  335638 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:29:22.606500  335638 start.go:159] libmachine.API.Create for "newest-cni-639420" (driver="docker")
	I1124 09:29:22.606540  335638 client.go:173] LocalClient.Create starting
	I1124 09:29:22.606611  335638 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:29:22.606649  335638 main.go:143] libmachine: Decoding PEM data...
	I1124 09:29:22.606672  335638 main.go:143] libmachine: Parsing certificate...
	I1124 09:29:22.606733  335638 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:29:22.606768  335638 main.go:143] libmachine: Decoding PEM data...
	I1124 09:29:22.606791  335638 main.go:143] libmachine: Parsing certificate...
	I1124 09:29:22.607211  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:29:22.635929  335638 cli_runner.go:211] docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:29:22.636004  335638 network_create.go:284] running [docker network inspect newest-cni-639420] to gather additional debugging logs...
	I1124 09:29:22.636023  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420
	W1124 09:29:22.660145  335638 cli_runner.go:211] docker network inspect newest-cni-639420 returned with exit code 1
	I1124 09:29:22.660178  335638 network_create.go:287] error running [docker network inspect newest-cni-639420]: docker network inspect newest-cni-639420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-639420 not found
	I1124 09:29:22.660195  335638 network_create.go:289] output of [docker network inspect newest-cni-639420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-639420 not found
	
	** /stderr **
	I1124 09:29:22.660349  335638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:22.692502  335638 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:29:22.696047  335638 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:29:22.697241  335638 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:29:22.698001  335638 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49a891848d14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:26:80:16:6d:29} reservation:<nil>}
	I1124 09:29:22.699090  335638 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c1e006301495 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d2:93:0f:4e:2a:a4} reservation:<nil>}
	I1124 09:29:22.699942  335638 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-3f03f3b5e2bf IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:d5:81:14:0a:58} reservation:<nil>}
	I1124 09:29:22.701166  335638 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4e360}
	I1124 09:29:22.701203  335638 network_create.go:124] attempt to create docker network newest-cni-639420 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:29:22.701253  335638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-639420 newest-cni-639420
	I1124 09:29:22.765511  335638 network_create.go:108] docker network newest-cni-639420 192.168.103.0/24 created
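Note the candidate subnets above step through the third octet in increments of 9 (49, 58, 67, 76, 85, 94, 103) until a /24 with no existing bridge is found. A toy version of that scan (hypothetical code; minikube's real logic also inspects host interfaces and reservations):

    package main

    import "fmt"

    func main() {
        // Third octets already claimed by existing minikube bridges, per the log above.
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}

        // Candidates step by 9 through the third octet: 49, 58, 67, 76, 85, 94, 103, ...
        for octet := 49; octet < 256; octet += 9 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            return
        }
        fmt.Println("no free subnet found in 192.168.0.0/16")
    }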
	I1124 09:29:22.765544  335638 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-639420" container
	I1124 09:29:22.765607  335638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:29:22.785985  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:22.791490  335638 cli_runner.go:164] Run: docker volume create newest-cni-639420 --label name.minikube.sigs.k8s.io=newest-cni-639420 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:29:22.815309  335638 oci.go:103] Successfully created a docker volume newest-cni-639420
	I1124 09:29:22.815447  335638 cli_runner.go:164] Run: docker run --rm --name newest-cni-639420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-639420 --entrypoint /usr/bin/test -v newest-cni-639420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:29:22.965281  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:23.125493  335638 cache.go:107] acquiring lock: {Name:mk50e8a993397cfd35eb04bbf3ec3f2f16922e03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125590  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:29:23.125599  335638 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.945µs
	I1124 09:29:23.125612  335638 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:29:23.125628  335638 cache.go:107] acquiring lock: {Name:mk44ea28b5ef083e518e10f8b09fe20e117fa612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125665  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:29:23.125672  335638 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.529µs
	I1124 09:29:23.125680  335638 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125696  335638 cache.go:107] acquiring lock: {Name:mk22cdf247cbd1eba82607ef17480dc2601681cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125763  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:29:23.125776  335638 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 82.084µs
	I1124 09:29:23.125785  335638 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125801  335638 cache.go:107] acquiring lock: {Name:mkbf0dee95f0ab47974350aecf97d10e64a67897 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125892  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:29:23.125899  335638 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 101.39µs
	I1124 09:29:23.125908  335638 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125921  335638 cache.go:107] acquiring lock: {Name:mk02678e83bd0bc783689569fa5806aa92d36dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126065  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:29:23.126072  335638 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 153.418µs
	I1124 09:29:23.126079  335638 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:29:23.126093  335638 cache.go:107] acquiring lock: {Name:mk4b39f728589920114b6f2c68f5093e514fadca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126142  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:29:23.126148  335638 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 58.094µs
	I1124 09:29:23.126158  335638 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:29:23.126171  335638 cache.go:107] acquiring lock: {Name:mk7db92c93cf19a2f7751497e327ce09d843bbd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126202  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:29:23.126208  335638 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 39.873µs
	I1124 09:29:23.126215  335638 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:29:23.126227  335638 cache.go:107] acquiring lock: {Name:mk690ae61adbe621ac8f3906853ffca5c6beb812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126265  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:29:23.126270  335638 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.169µs
	I1124 09:29:23.126277  335638 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:29:23.126285  335638 cache.go:87] Successfully saved all images to host disk.
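
Every cache probe above resolves in microseconds: each image gets its own named lock, and a hit is nothing more than a stat of the pre-saved tarball under .minikube/cache/images. A minimal Go sketch of that fast path (hypothetical helper names; the real logic lives in minikube's cache.go and is not reproduced here):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    var cacheLocks sync.Map // one mutex per cache path, like the named locks in the log

    // cachedImagePath mirrors the layout seen above:
    // gcr.io/k8s-minikube/storage-provisioner:v5 ->
    //   <cacheDir>/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
    func cachedImagePath(cacheDir, image string) string {
        return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached returns quickly when the tarball already exists; only a
    // cache miss would trigger the (not shown) pull-and-save step.
    func ensureCached(cacheDir, image string) error {
        dst := cachedImagePath(cacheDir, image)
        mu, _ := cacheLocks.LoadOrStore(dst, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
            return nil // already saved to host disk
        }
        return fmt.Errorf("cache miss for %s: would pull and save tar here", image)
    }

    func main() {
        fmt.Println(ensureCached("/home/jenkins/minikube-integration/21978-5690/.minikube/cache",
            "gcr.io/k8s-minikube/storage-provisioner:v5"))
    }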
	I1124 09:29:23.264663  335638 oci.go:107] Successfully prepared a docker volume newest-cni-639420
	I1124 09:29:23.264743  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1124 09:29:23.264839  335638 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:29:23.264876  335638 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:29:23.264920  335638 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:29:23.340405  335638 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-639420 --name newest-cni-639420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-639420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-639420 --network newest-cni-639420 --ip 192.168.103.2 --volume newest-cni-639420:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:29:23.676315  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Running}}
	I1124 09:29:23.696133  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:23.716195  335638 cli_runner.go:164] Run: docker exec newest-cni-639420 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:29:23.767472  335638 oci.go:144] the created container "newest-cni-639420" has a running status.
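
The driver decides the container is up the same way the cli_runner lines show: docker container inspect with a Go template over .State. A stripped-down sketch of that probe, assuming only that the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerRunning shells out to `docker container inspect` the same way
    // cli_runner does above and parses the templated boolean.
    func containerRunning(name string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Running}}").CombinedOutput()
        if err != nil {
            return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
        }
        return strings.TrimSpace(string(out)) == "true", nil
    }

    func main() {
        ok, err := containerRunning("newest-cni-639420")
        fmt.Println(ok, err)
    }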
	I1124 09:29:23.767502  335638 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa...
	I1124 09:29:23.880299  335638 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:29:23.913685  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:23.938407  335638 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:29:23.938430  335638 kic_runner.go:114] Args: [docker exec --privileged newest-cni-639420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:29:23.995674  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:24.019807  335638 machine.go:94] provisionDockerMachine start ...
	I1124 09:29:24.019886  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.039934  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.040236  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.040257  335638 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:29:24.194642  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-639420
	
	I1124 09:29:24.194671  335638 ubuntu.go:182] provisioning hostname "newest-cni-639420"
	I1124 09:29:24.194731  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.214977  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.215249  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.215272  335638 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-639420 && echo "newest-cni-639420" | sudo tee /etc/hostname
	I1124 09:29:24.375311  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-639420
	
	I1124 09:29:24.375402  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.396206  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.396483  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.396508  335638 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639420/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:29:24.542631  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: 
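
Each of these shell snippets runs over the forwarded SSH port (33118 here) using the id_rsa generated a moment earlier. A sketch of one such remote command using golang.org/x/crypto/ssh (minikube's actual implementation sits in its machine/provision code; host-key checking is skipped here because the target is a local container, not a remote host):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded 127.0.0.1 port, authenticates with the
    // generated key, and runs a single provisioning command.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:33118",
            "/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa",
            `sudo hostname newest-cni-639420 && echo "newest-cni-639420" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }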
	I1124 09:29:24.542659  335638 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:29:24.542699  335638 ubuntu.go:190] setting up certificates
	I1124 09:29:24.542712  335638 provision.go:84] configureAuth start
	I1124 09:29:24.542757  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:24.563950  335638 provision.go:143] copyHostCerts
	I1124 09:29:24.564035  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:29:24.564060  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:29:24.564140  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:29:24.564260  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:29:24.564274  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:29:24.564314  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:29:24.564451  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:29:24.564466  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:29:24.564508  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:29:24.564572  335638 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639420 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-639420]
	I1124 09:29:24.634070  335638 provision.go:177] copyRemoteCerts
	I1124 09:29:24.634118  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:29:24.634148  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.654625  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:24.757462  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:29:24.776220  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:29:24.794520  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:29:24.813093  335638 provision.go:87] duration metric: took 270.366164ms to configureAuth
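
configureAuth signs a server certificate whose SANs match the san=[...] list above and whose lifetime matches the profile's CertExpiration of 26280h. A stdlib sketch of that signing step; for self-containment the CA here is generated on the fly rather than loaded from ca.pem/ca-key.pem as minikube does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server cert with the SANs from the log:
    // [127.0.0.1 192.168.103.2 localhost minikube newest-cni-639420]
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-639420"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-639420"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // throwaway CA standing in for ca.pem/ca-key.pem
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        pemBytes, err := signServerCert(caCert, caKey)
        fmt.Println(len(pemBytes), err)
    }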
	I1124 09:29:24.813121  335638 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:29:24.813292  335638 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:24.813401  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.834251  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.834536  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.834557  335638 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:29:25.126491  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:29:25.126511  335638 machine.go:97] duration metric: took 1.106687482s to provisionDockerMachine
	I1124 09:29:25.126521  335638 client.go:176] duration metric: took 2.519971934s to LocalClient.Create
	I1124 09:29:25.126544  335638 start.go:167] duration metric: took 2.520045206s to libmachine.API.Create "newest-cni-639420"
	I1124 09:29:25.126559  335638 start.go:293] postStartSetup for "newest-cni-639420" (driver="docker")
	I1124 09:29:25.126572  335638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:29:25.126630  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:29:25.126678  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.145747  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.250031  335638 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:29:25.253397  335638 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:29:25.253419  335638 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:29:25.253429  335638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:29:25.253477  335638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:29:25.253562  335638 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:29:25.253646  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:29:25.261375  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:25.281316  335638 start.go:296] duration metric: took 154.739012ms for postStartSetup
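
The filesync pass inside postStartSetup simply walks .minikube/files and maps every file to the same path on the node, which is how 92432.pem lands in /etc/ssl/certs. A sketch of that scan (the scp copy step is elided; the print format imitates the filesync.go lines above):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // scanLocalAssets walks <minikubeHome>/files and maps each file to its
    // destination on the node, e.g. files/etc/ssl/certs/92432.pem -> /etc/ssl/certs.
    func scanLocalAssets(filesDir string) ([][2]string, error) {
        var assets [][2]string
        err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, _ := filepath.Rel(filesDir, path)
            dst := "/" + filepath.Dir(rel)
            assets = append(assets, [2]string{path, dst})
            fmt.Printf("local asset: %s -> %s in %s\n", path, filepath.Base(rel), dst)
            return nil
        })
        return assets, err
    }

    func main() {
        if _, err := scanLocalAssets("/home/jenkins/minikube-integration/21978-5690/.minikube/files"); err != nil {
            fmt.Println(err)
        }
    }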
	I1124 09:29:25.281675  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:25.300835  335638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json ...
	I1124 09:29:25.301127  335638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:29:25.301177  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.321665  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.432025  335638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:29:25.437384  335638 start.go:128] duration metric: took 2.834085276s to createHost
	I1124 09:29:25.437412  335638 start.go:83] releasing machines lock for "newest-cni-639420", held for 2.834231966s
	I1124 09:29:25.437518  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:25.463404  335638 ssh_runner.go:195] Run: cat /version.json
	I1124 09:29:25.463463  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.463541  335638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:29:25.463627  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.489226  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.491764  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.602110  335638 ssh_runner.go:195] Run: systemctl --version
	I1124 09:29:25.688534  335638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:29:25.738028  335638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:29:25.744117  335638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:29:25.744186  335638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:29:25.776213  335638 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:29:25.776234  335638 start.go:496] detecting cgroup driver to use...
	I1124 09:29:25.776266  335638 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:29:25.776315  335638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:29:25.800921  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:29:25.816518  335638 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:29:25.816573  335638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:29:25.835996  335638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:29:25.859536  335638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:29:25.956711  335638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:29:26.048642  335638 docker.go:234] disabling docker service ...
	I1124 09:29:26.048701  335638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:29:26.070200  335638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:29:26.082794  335638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:29:26.161849  335638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:29:26.244252  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:29:26.256768  335638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:29:26.270526  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:26.413542  335638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:29:26.413610  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.424485  335638 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:29:26.424548  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.433347  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.442022  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.450535  335638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:29:26.458406  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.466709  335638 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.485185  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.494936  335638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:29:26.503451  335638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:29:26.512446  335638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:26.607745  335638 ssh_runner.go:195] Run: sudo systemctl restart crio
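
All of the CRI-O tweaks above are line-oriented sed edits to /etc/crio/crio.conf.d/02-crio.conf, followed by daemon-reload and a crio restart. A Go sketch of the first two substitutions (pause_image and cgroup_manager) using the same anchored patterns, operating on a local copy of the file:

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the two substitutions logged above:
    // pause_image -> registry.k8s.io/pause:3.10.1, cgroup_manager -> systemd.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        _ = rewriteCrioConf("02-crio.conf") // local copy; the real file needs root
    }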
	I1124 09:29:26.740004  335638 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:29:26.740072  335638 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:29:26.744918  335638 start.go:564] Will wait 60s for crictl version
	I1124 09:29:26.744980  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:26.749213  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:29:26.778225  335638 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:29:26.778341  335638 ssh_runner.go:195] Run: crio --version
	I1124 09:29:26.817193  335638 ssh_runner.go:195] Run: crio --version
	I1124 09:29:26.857658  335638 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:29:25.621005  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:29:25.621027  333403 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:29:25.621082  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.650236  333403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:25.650263  333403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:29:25.650325  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.656443  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.656558  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.680533  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.754168  333403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:25.768201  333403 node_ready.go:35] waiting up to 6m0s for node "no-preload-938348" to be "Ready" ...
	I1124 09:29:25.780161  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:25.785322  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:29:25.785364  333403 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:29:25.802823  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:29:25.802851  333403 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:29:25.803558  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:25.820621  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:29:25.820645  333403 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:29:25.835245  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:29:25.835268  333403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:29:25.851624  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:29:25.851655  333403 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:29:25.868485  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:29:25.868512  333403 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:29:25.881781  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:29:25.881805  333403 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:29:25.898837  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:29:25.898859  333403 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:29:25.913417  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:29:25.913444  333403 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:29:25.929159  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:29:26.859114  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:26.881311  335638 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:29:26.885555  335638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:29:26.898515  335638 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 09:29:26.899623  335638 kubeadm.go:884] updating cluster {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:29:26.899846  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:27.062935  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:27.248815  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:27.420412  333403 node_ready.go:49] node "no-preload-938348" is "Ready"
	I1124 09:29:27.420445  333403 node_ready.go:38] duration metric: took 1.652208535s for node "no-preload-938348" to be "Ready" ...
	I1124 09:29:27.420461  333403 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:29:27.420510  333403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:29:28.112081  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331875566s)
	I1124 09:29:28.112133  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.30847927s)
	I1124 09:29:28.112267  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.183075802s)
	I1124 09:29:28.112327  333403 api_server.go:72] duration metric: took 2.533463434s to wait for apiserver process to appear ...
	I1124 09:29:28.112396  333403 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:29:28.112418  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:28.113923  333403 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-938348 addons enable metrics-server
	
	I1124 09:29:28.118193  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:29:28.118221  333403 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:29:28.122505  333403 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1124 09:29:24.638988  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:27.138848  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:25.726359  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:28.229199  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:28.123704  333403 addons.go:530] duration metric: took 2.544816591s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:29:28.613397  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:28.619304  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:29:28.619375  333403 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:29:29.112960  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:29.118507  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:29:29.119708  333403 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:29:29.119736  333403 api_server.go:131] duration metric: took 1.00733118s to wait for apiserver health ...
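
The 500s above are expected while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish; the wait loop simply re-polls /healthz until it returns 200. A minimal sketch of such a poller, skipping TLS verification since this simplified probe does not load the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes, mirroring the retry loop in the log.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", time.Minute))
    }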
	I1124 09:29:29.119749  333403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:29.123578  333403 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:29.123627  333403 system_pods.go:61] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:29.123647  333403 system_pods.go:61] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:29.123658  333403 system_pods.go:61] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:29:29.123669  333403 system_pods.go:61] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:29.123682  333403 system_pods.go:61] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:29.123689  333403 system_pods.go:61] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:29:29.123698  333403 system_pods.go:61] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:29.123704  333403 system_pods.go:61] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Running
	I1124 09:29:29.123711  333403 system_pods.go:74] duration metric: took 3.95574ms to wait for pod list to return data ...
	I1124 09:29:29.123727  333403 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:29.126930  333403 default_sa.go:45] found service account: "default"
	I1124 09:29:29.126946  333403 default_sa.go:55] duration metric: took 3.214232ms for default service account to be created ...
	I1124 09:29:29.126954  333403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:29.129597  333403 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:29.129621  333403 system_pods.go:89] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:29.129629  333403 system_pods.go:89] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:29.129637  333403 system_pods.go:89] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:29:29.129646  333403 system_pods.go:89] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:29.129659  333403 system_pods.go:89] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:29.129670  333403 system_pods.go:89] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:29:29.129680  333403 system_pods.go:89] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:29.129688  333403 system_pods.go:89] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Running
	I1124 09:29:29.129696  333403 system_pods.go:126] duration metric: took 2.736234ms to wait for k8s-apps to be running ...
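
system_pods.go is doing the Kubernetes-native version of the listing above: fetch every kube-system pod and inspect its conditions. A rough client-go equivalent (the kubeconfig path below is the on-node one from the log; adjust it for a local run):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // listSystemPods reproduces the "8 kube-system pods found" listing and
    // reports which pods carry a true Ready condition.
    func listSystemPods(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
        return nil
    }

    func main() {
        if err := listSystemPods("/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }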
	I1124 09:29:29.129709  333403 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:29.129761  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:29.144790  333403 system_svc.go:56] duration metric: took 15.076412ms WaitForService to wait for kubelet
	I1124 09:29:29.144816  333403 kubeadm.go:587] duration metric: took 3.565955333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:29.144836  333403 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:29.148142  333403 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:29.148166  333403 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:29.148179  333403 node_conditions.go:105] duration metric: took 3.338764ms to run NodePressure ...
	I1124 09:29:29.148190  333403 start.go:242] waiting for startup goroutines ...
	I1124 09:29:29.148197  333403 start.go:247] waiting for cluster config update ...
	I1124 09:29:29.148207  333403 start.go:256] writing updated cluster config ...
	I1124 09:29:29.148587  333403 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:29.153407  333403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:29.157433  333403 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ll2c4" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:29:31.163742  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:27.419562  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:27.419629  335638 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:29:27.460918  335638 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:29:27.460945  335638 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:29:27.461011  335638 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:27.461369  335638 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.461449  335638 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.461464  335638 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.461655  335638 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.461712  335638 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.461854  335638 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.461897  335638 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.464257  335638 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.464411  335638 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.464628  335638 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.464689  335638 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.464988  335638 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.465073  335638 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.465265  335638 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:27.467072  335638 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.627554  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.628494  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.656948  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:29:27.659842  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.660130  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.661481  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.669123  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.675953  335638 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:29:27.675989  335638 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.676029  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.676133  335638 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:29:27.676154  335638 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.676189  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.718910  335638 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:29:27.718963  335638 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.719008  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.722396  335638 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:29:27.722439  335638 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.722502  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.773905  335638 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:29:27.773953  335638 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.773959  335638 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:29:27.773992  335638 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.774000  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774024  335638 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:29:27.774032  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774053  335638 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.774087  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774164  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.774169  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.774222  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.774246  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.820103  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.820178  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.820194  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.820205  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.820254  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.859804  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.859896  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.869005  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.869109  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.869193  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.870157  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.873591  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.914164  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.914280  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.935841  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.940089  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.940094  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.940233  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:29:27.940316  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:27.940653  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:27.940724  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:27.970433  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:29:27.970782  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:27.982144  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:27.982159  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:29:27.982326  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:27.982393  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:27.988403  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:29:27.988475  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:27.988506  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:27.988569  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:27.988584  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.988601  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:29:27.988570  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:29:27.988633  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:29:27.988663  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:29:27.988684  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:29:27.989467  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.989489  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:29:27.989530  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.989547  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:29:28.003399  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:29:28.003439  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:29:28.003507  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:29:28.003519  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
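
[Editor's note] The stat-then-scp exchanges above are minikube's image-cache existence check: each cached tarball is stat'ed on the node and copied over only when the stat exits non-zero (file absent). A minimal local Go sketch of that check-then-copy pattern — the ensureFile helper and the /tmp paths are illustrative stand-ins, not minikube's actual ssh_runner code:

// ensurefile.go: copy src to dst only when dst does not already exist,
// mirroring minikube's `stat -c "%s %y"` existence probe.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func ensureFile(src, dst string) error {
	// A non-zero exit from stat means the destination is missing.
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	fmt.Printf("copied %s -> %s (%d bytes)\n", src, dst, n)
	return err
}

func main() {
	if err := ensureFile("/tmp/src.img", "/tmp/dst.img"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

In the log the same probe/copy pair runs over SSH against /var/lib/minikube/images, which is why every missing image produces a "Process exited with status 1" block followed by an scp line.
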
	I1124 09:29:28.075946  335638 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:28.076025  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:28.435196  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:28.467390  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 09:29:28.467433  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:28.467481  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:28.544058  335638 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:29:28.544104  335638 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:28.544158  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:29.658664  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.191161688s)
	I1124 09:29:29.658696  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:29:29.658705  335638 ssh_runner.go:235] Completed: which crictl: (1.114530607s)
	I1124 09:29:29.658722  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:29.658761  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:29.658764  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:30.831023  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.172237149s)
	I1124 09:29:30.831058  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:29:30.831097  335638 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:30.831104  335638 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.172303496s)
	I1124 09:29:30.831157  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:30.831161  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:30.857684  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:32.198551  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.367365601s)
	I1124 09:29:32.198579  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:29:32.198601  335638 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:32.198644  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:32.198644  335638 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.3409304s)
	I1124 09:29:32.198688  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:29:32.198817  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1124 09:29:29.138937  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:31.140211  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:33.639508  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:30.724219  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:33.226759  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:33.166158  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:35.169759  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:34.139080  326387 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:34.139124  326387 node_ready.go:38] duration metric: took 11.50378844s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:29:34.139140  326387 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:29:34.139192  326387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:29:34.158537  326387 api_server.go:72] duration metric: took 11.922691746s to wait for apiserver process to appear ...
	I1124 09:29:34.158566  326387 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:29:34.158588  326387 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:29:34.167073  326387 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:29:34.168208  326387 api_server.go:141] control plane version: v1.34.2
	I1124 09:29:34.168234  326387 api_server.go:131] duration metric: took 9.659516ms to wait for apiserver health ...
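
[Editor's note] The healthz wait above reduces to polling https://192.168.85.2:8444/healthz until the apiserver answers 200 "ok". A hedged Go sketch of that probe — the address is taken from the log; skipping TLS verification is an illustrative shortcut for a quick check, since the cluster CA would normally be trusted instead:

// healthz.go: one-shot probe of the apiserver health endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
}
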
	I1124 09:29:34.168244  326387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:34.172677  326387 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:34.172739  326387 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.172755  326387 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.172764  326387 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.172771  326387 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.172776  326387 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.172787  326387 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.172800  326387 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.172821  326387 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.172834  326387 system_pods.go:74] duration metric: took 4.582003ms to wait for pod list to return data ...
	I1124 09:29:34.172844  326387 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:34.175534  326387 default_sa.go:45] found service account: "default"
	I1124 09:29:34.175550  326387 default_sa.go:55] duration metric: took 2.700612ms for default service account to be created ...
	I1124 09:29:34.175561  326387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:34.179181  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.179211  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.179219  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.179226  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.179232  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.179237  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.179242  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.179247  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.179254  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.179275  326387 retry.go:31] will retry after 297.148701ms: missing components: kube-dns
	I1124 09:29:34.488122  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.488167  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.488175  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.488184  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.488190  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.488196  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.488203  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.488208  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.488215  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.488232  326387 retry.go:31] will retry after 287.470129ms: missing components: kube-dns
	I1124 09:29:34.781622  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.781657  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.781666  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.781674  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.781680  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.781685  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.781690  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.781698  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.781712  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.781731  326387 retry.go:31] will retry after 468.737219ms: missing components: kube-dns
	I1124 09:29:35.258570  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:35.258605  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running
	I1124 09:29:35.258613  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:35.258619  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:35.258625  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:35.258631  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:35.258635  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:35.258641  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:35.258645  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:29:35.258654  326387 system_pods.go:126] duration metric: took 1.083086655s to wait for k8s-apps to be running ...
	I1124 09:29:35.258663  326387 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:35.258711  326387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:35.277987  326387 system_svc.go:56] duration metric: took 19.296897ms WaitForService to wait for kubelet
	I1124 09:29:35.278195  326387 kubeadm.go:587] duration metric: took 13.0423584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:35.278239  326387 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:35.282999  326387 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:35.283084  326387 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:35.283119  326387 node_conditions.go:105] duration metric: took 4.851131ms to run NodePressure ...
	I1124 09:29:35.283157  326387 start.go:242] waiting for startup goroutines ...
	I1124 09:29:35.283168  326387 start.go:247] waiting for cluster config update ...
	I1124 09:29:35.283183  326387 start.go:256] writing updated cluster config ...
	I1124 09:29:35.283487  326387 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:35.288615  326387 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:35.293905  326387 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.302326  326387 pod_ready.go:94] pod "coredns-66bc5c9577-gn9zx" is "Ready"
	I1124 09:29:35.302365  326387 pod_ready.go:86] duration metric: took 8.436531ms for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.305415  326387 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.312327  326387 pod_ready.go:94] pod "etcd-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.312364  326387 pod_ready.go:86] duration metric: took 6.924592ms for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.314862  326387 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.318955  326387 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.318978  326387 pod_ready.go:86] duration metric: took 4.093677ms for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.321394  326387 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.692851  326387 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.692882  326387 pod_ready.go:86] duration metric: took 371.463188ms for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.897066  326387 pod_ready.go:83] waiting for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.294824  326387 pod_ready.go:94] pod "kube-proxy-2vm2s" is "Ready"
	I1124 09:29:36.294849  326387 pod_ready.go:86] duration metric: took 397.755705ms for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.497305  326387 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.894816  326387 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:36.894843  326387 pod_ready.go:86] duration metric: took 397.509298ms for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.894960  326387 pod_ready.go:40] duration metric: took 1.606200462s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:36.961071  326387 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:29:36.967753  326387 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-164377" cluster and "default" namespace by default
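
[Editor's note] The system_pods and pod_ready waits logged above check each kube-system pod's Ready condition through the apiserver. A rough client-go equivalent of that condition check — the kubeconfig path is a placeholder; this is not minikube's pod_ready.go, just the same status inspection:

// podready.go: list kube-system pods and report their Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}
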
	I1124 09:29:34.419549  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0: (2.220878777s)
	I1124 09:29:34.419586  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:29:34.419606  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:34.419660  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:34.419673  335638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.220841723s)
	I1124 09:29:34.419694  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:29:34.419716  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:29:36.150703  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.731017207s)
	I1124 09:29:36.150736  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:29:36.150757  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:36.150802  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	W1124 09:29:35.731259  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:38.224668  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:37.665170  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:40.164504  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:38.528548  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.377720965s)
	I1124 09:29:38.528580  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:29:38.528605  335638 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:29:38.528676  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:29:39.961131  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.432426877s)
	I1124 09:29:39.961160  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:29:39.961186  335638 cache_images.go:125] Successfully loaded all cached images
	I1124 09:29:39.961193  335638 cache_images.go:94] duration metric: took 12.500231207s to LoadCachedImages
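
[Editor's note] As the log shows, LoadCachedImages boils down to one `sudo podman load -i <tarball>` per staged archive (CRI-O and podman share image storage, so images loaded via podman become visible to crictl). A minimal sketch of that loop — the tarball list is abbreviated from the log:

// loadimages.go: import staged image archives into the node's container storage.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{ // illustrative subset of /var/lib/minikube/images
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0",
	}
	for _, tar := range images {
		// `podman load -i <file>` reads an image archive and stores it locally.
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		fmt.Printf("%s: %s err=%v\n", tar, out, err)
	}
}
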
	I1124 09:29:39.961203  335638 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:29:39.961290  335638 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-639420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:29:39.961405  335638 ssh_runner.go:195] Run: crio config
	I1124 09:29:40.017556  335638 cni.go:84] Creating CNI manager for ""
	I1124 09:29:40.017574  335638 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:40.017588  335638 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 09:29:40.017611  335638 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639420 NodeName:newest-cni-639420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:29:40.017725  335638 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
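
[Editor's note] The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks such a stream with gopkg.in/yaml.v3 — the path is taken from the log; the rest is illustrative:

// kubeadmdocs.go: enumerate the documents in a multi-doc kubeadm YAML file.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // yields one document per Decode call
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}
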
	
	I1124 09:29:40.017780  335638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:40.026708  335638 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:29:40.026770  335638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:40.035553  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:29:40.035588  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:40.035605  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:40.035555  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:29:40.035670  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:29:40.035752  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:29:40.041028  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:29:40.041055  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:29:40.054908  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:29:40.054907  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:29:40.054990  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:29:40.071621  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:29:40.071656  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
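
[Editor's note] The "Not caching binary" lines show each binary being fetched with a `?checksum=file:<url>.sha256` query, i.e. the download is validated against the digest published next to it on dl.k8s.io. A hedged sketch of that verification step — verify and the placeholder digest are illustrative, not minikube's downloader:

// checksum.go: recompute a file's SHA-256 and compare it with the published digest.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verify(path, wantHex string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == wantHex, nil
}

func main() {
	// The digest value would come from the matching .sha256 file; placeholder here.
	ok, err := verify("kubelet", "0123...")
	fmt.Println(ok, err)
}
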
	I1124 09:29:40.519484  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:29:40.527951  335638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1124 09:29:40.541490  335638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:29:40.557833  335638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:29:40.571237  335638 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:29:40.576306  335638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
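
[Editor's note] The bash one-liner above rewrites /etc/hosts idempotently: drop any existing control-plane.minikube.internal entry, append a fresh one, and copy the temp file back with sudo. The same upsert expressed in Go — run here against a local test file rather than /etc/hosts itself:

// hostsupsert.go: replace-or-append a "<ip>\t<host>" line, matching the grep -v pattern.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Same filter as `grep -v $'\t<host>$'`: drop the old entry, keep the rest.
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
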
	I1124 09:29:40.589883  335638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:40.678126  335638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:40.704903  335638 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420 for IP: 192.168.103.2
	I1124 09:29:40.704941  335638 certs.go:195] generating shared ca certs ...
	I1124 09:29:40.704959  335638 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.705124  335638 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:29:40.705180  335638 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:29:40.705192  335638 certs.go:257] generating profile certs ...
	I1124 09:29:40.705287  335638 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key
	I1124 09:29:40.705309  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt with IP's: []
	I1124 09:29:40.775997  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt ...
	I1124 09:29:40.776027  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt: {Name:mkcf58d60ab21e3774368023568c4a98b624e7d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.776190  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key ...
	I1124 09:29:40.776201  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key: {Name:mka249965908f6ad2a4645fcec87590859e3d741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.776282  335638 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5
	I1124 09:29:40.776296  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:29:40.890654  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 ...
	I1124 09:29:40.890679  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5: {Name:mk2d89ff9289520c269c4447c1a2481a90ae6b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.890829  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5 ...
	I1124 09:29:40.890844  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5: {Name:mkd105e7354007cf88b1a316f5e37bcbc13961b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.890931  335638 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt
	I1124 09:29:40.891002  335638 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key
	I1124 09:29:40.891063  335638 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key
	I1124 09:29:40.891079  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt with IP's: []
	I1124 09:29:40.946849  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt ...
	I1124 09:29:40.946874  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt: {Name:mk36d387ce57fdba3e54ffc9476c2588e55a96b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.947020  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key ...
	I1124 09:29:40.947040  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key: {Name:mkff8f18af606cfd446ad97e56c96ba9f13e37da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.947218  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:29:40.947290  335638 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:29:40.947305  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:29:40.947343  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:29:40.947375  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:29:40.947398  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:29:40.947439  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:40.948103  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:29:40.967593  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:29:40.985798  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:29:41.005882  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:29:41.024614  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:29:41.042622  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:29:41.060169  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:29:41.077427  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:29:41.095055  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:29:41.115317  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:29:41.133179  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:29:41.150670  335638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
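
[Editor's note] The certs.go steps above generate profile certificates signed by the shared minikubeCA, with the apiserver cert carrying the SAN IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). A compact crypto/x509 sketch that produces a comparable self-signed CA certificate — parameters mirror the log where possible; everything else is illustrative, not minikube's certs.go:

// makecert.go: emit a self-signed CA cert with the apiserver's SAN IPs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikubeCA"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
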
	I1124 09:29:41.163944  335638 ssh_runner.go:195] Run: openssl version
	I1124 09:29:41.170403  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:29:41.178881  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.182872  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.182924  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.218574  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:29:41.228668  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:29:41.237539  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.241785  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.241843  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.279712  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:29:41.289216  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:29:41.298273  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.303190  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.303272  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.339223  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
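
[Editor's note] The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust store: OpenSSL looks certificates up in /etc/ssl/certs by subject-name hash, so the symlink must be named "<hash>.0" (e.g. b5213941.0 for minikubeCA above). A print-only sketch that derives the link name the same way — it does not touch /etc/ssl/certs:

// certhash.go: compute the OpenSSL subject-hash link name for a PEM cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Println("would symlink /etc/ssl/certs/"+hash+".0 ->", pemPath)
}
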
	I1124 09:29:41.349396  335638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:29:41.353636  335638 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:29:41.353707  335638 kubeadm.go:401] StartCluster: {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:41.353787  335638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:29:41.353835  335638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:29:41.385066  335638 cri.go:89] found id: ""
	I1124 09:29:41.385132  335638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:29:41.394546  335638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:29:41.403899  335638 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:29:41.403955  335638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:29:41.412726  335638 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:29:41.412781  335638 kubeadm.go:158] found existing configuration files:
	
	I1124 09:29:41.412846  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:29:41.421489  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:29:41.421553  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:29:41.429748  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:29:41.438109  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:29:41.438163  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:29:41.445795  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:29:41.453942  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:29:41.453999  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:29:41.462783  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:29:41.471339  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:29:41.471392  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:29:41.479973  335638 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:29:41.593364  335638 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:29:41.656913  335638 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 09:29:40.224803  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:42.724388  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
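
Both kubeadm preflight warnings above are benign in this throwaway CI run; on a long-lived node they could be checked or silenced as below (the second command lists the standard locations where a kernel config can be found when the 'configs' module is unavailable; treat this as a hedged pointer, not a required fix):

    # addresses [WARNING Service-kubelet]; command quoted from the warning itself
    sudo systemctl enable kubelet.service
    # the SystemVerification warning means no kernel config was loadable via
    # 'modprobe configs'; a config may still exist at these usual paths
    ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null
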
	
	
	==> CRI-O <==
	Nov 24 09:29:34 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:34.308862812Z" level=info msg="Starting container: 4fcfe873f5251f30e2b2a1765a607ad43850a96b5789d921bad91483581cba7f" id=fad9d879-1709-48a0-b697-bdedee931ae0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:34 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:34.313660768Z" level=info msg="Started container" PID=1818 containerID=4fcfe873f5251f30e2b2a1765a607ad43850a96b5789d921bad91483581cba7f description=kube-system/coredns-66bc5c9577-gn9zx/coredns id=fad9d879-1709-48a0-b697-bdedee931ae0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=562bab0dd4525a4175f48515c53fff6506eb25c2c1b5074788ddee63bab8032e
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.494231841Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3a2d8ed1-f7b3-472a-8b95-32713a6663ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.494318911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.501020532Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce49a4795a63699cc561be840d310c71ef644e2fdda57464c7d8abf33299e6aa UID:405ec516-207e-443a-b038-ac6f6da6efb1 NetNS:/var/run/netns/72a8cde7-15b1-47f5-b112-fd06f4e52ee1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128490}] Aliases:map[]}"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.501078906Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.522416936Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce49a4795a63699cc561be840d310c71ef644e2fdda57464c7d8abf33299e6aa UID:405ec516-207e-443a-b038-ac6f6da6efb1 NetNS:/var/run/netns/72a8cde7-15b1-47f5-b112-fd06f4e52ee1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128490}] Aliases:map[]}"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.52259779Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.525697896Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.526869979Z" level=info msg="Ran pod sandbox ce49a4795a63699cc561be840d310c71ef644e2fdda57464c7d8abf33299e6aa with infra container: default/busybox/POD" id=3a2d8ed1-f7b3-472a-8b95-32713a6663ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.528272256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=afb49b1e-df97-47f6-a0a3-eacfcfee96d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.528450415Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=afb49b1e-df97-47f6-a0a3-eacfcfee96d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.528503487Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=afb49b1e-df97-47f6-a0a3-eacfcfee96d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.52964086Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3074af2-1df3-43da-8f5e-cdb95154bd8a name=/runtime.v1.ImageService/PullImage
	Nov 24 09:29:37 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:37.53160287Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.964824591Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a3074af2-1df3-43da-8f5e-cdb95154bd8a name=/runtime.v1.ImageService/PullImage
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.96567731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b57f949e-57af-4db9-a2c6-25e36172d659 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.967216505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e1f88e42-372c-4b30-8308-8891c17f67a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.970924281Z" level=info msg="Creating container: default/busybox/busybox" id=109aa925-4441-4446-b576-63bf13e2973c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.971296148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.977143038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:38 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:38.977715769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:39 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:39.020588379Z" level=info msg="Created container 904e57311c5430349e7e5516f678603b48f15b656d53e0778c712ba284b7fc18: default/busybox/busybox" id=109aa925-4441-4446-b576-63bf13e2973c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:39 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:39.021508797Z" level=info msg="Starting container: 904e57311c5430349e7e5516f678603b48f15b656d53e0778c712ba284b7fc18" id=124bfcc6-9ac0-47ca-a326-ee1515842817 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:39 default-k8s-diff-port-164377 crio[774]: time="2025-11-24T09:29:39.023925804Z" level=info msg="Started container" PID=1893 containerID=904e57311c5430349e7e5516f678603b48f15b656d53e0778c712ba284b7fc18 description=default/busybox/busybox id=124bfcc6-9ac0-47ca-a326-ee1515842817 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce49a4795a63699cc561be840d310c71ef644e2fdda57464c7d8abf33299e6aa
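
The CRI-O lines above trace the standard CRI lifecycle for the default/busybox pod: RunPodSandbox, then an ImageStatus miss, PullImage, CreateContainer, and StartContainer. The same state can be inspected from the node with crictl, e.g. (IDs truncated as in the container-status table below):

    sudo crictl pods --name busybox     # the sandbox ce49a4795a636...
    sudo crictl images | grep busybox   # the pulled gcr.io/k8s-minikube/busybox image
    sudo crictl ps --name busybox       # the running container 904e57311c543...
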
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	904e57311c543       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   ce49a4795a636       busybox                                                default
	4fcfe873f5251       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   562bab0dd4525       coredns-66bc5c9577-gn9zx                               kube-system
	28451e4cc4038       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   3f76cc245d3d6       storage-provisioner                                    kube-system
	c8c941b620811       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      24 seconds ago      Running             kube-proxy                0                   00495f77d65af       kube-proxy-2vm2s                                       kube-system
	2045d127010b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   c73b0d158cc83       kindnet-kwvs7                                          kube-system
	c8388b83bf77f       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   d08f1cc6abbf7       kube-controller-manager-default-k8s-diff-port-164377   kube-system
	c035c5f970c40       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   116ba1ff8ef66       etcd-default-k8s-diff-port-164377                      kube-system
	a3472212a766d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   1e213651f956d       kube-scheduler-default-k8s-diff-port-164377            kube-system
	0eecabefa13f0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   81c0449230c3a       kube-apiserver-default-k8s-diff-port-164377            kube-system
	
	
	==> coredns [4fcfe873f5251f30e2b2a1765a607ad43850a96b5789d921bad91483581cba7f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33104 - 52346 "HINFO IN 5125204383910166605.7150118242431617123. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046968095s
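
The SHA512 line fingerprints the Corefile that CoreDNS loaded; the configuration behind it lives in the standard kube-system ConfigMap and can be read with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
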
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-164377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-164377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=default-k8s-diff-port-164377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-164377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:29:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:33 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:33 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:33 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:29:33 +0000   Mon, 24 Nov 2025 09:29:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-164377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                89405ce7-5c63-4de3-9dc9-d223bdf4644b
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-gn9zx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-164377                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-kwvs7                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-164377             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-164377    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-2vm2s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-164377             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-164377 event: Registered Node default-k8s-diff-port-164377 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-164377 status is now: NodeReady
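
For reference, this block is the standard node description output, reproducible (using the kubeconfig context the test harness itself uses below) with:

    kubectl --context default-k8s-diff-port-164377 describe node default-k8s-diff-port-164377
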
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
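
The repeating "martian source" entries are the kernel flagging packets that arrive on eth0 with source addresses from the pod CIDR (10.244.0.0/16), common noise with bridged container traffic rather than a fault in this run. Whether such packets are logged is a per-interface sysctl:

    sysctl net.ipv4.conf.all.log_martians
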
	
	
	==> etcd [c035c5f970c4053b8ec5ef79b2487c9ae766d2e1ea7c14c541263edf8d9ecd1a] <==
	{"level":"warn","ts":"2025-11-24T09:29:13.405696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.413157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.420982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.428694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.437681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.444060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.451966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.459736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.471535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.478436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.486890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.499786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.507963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.515536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.523438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.530298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.538112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.545227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.553060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.559867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.578717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.586903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.595280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:13.648815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:29:33.918317Z","caller":"traceutil/trace.go:172","msg":"trace[1028120408] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"152.659489ms","start":"2025-11-24T09:29:33.765630Z","end":"2025-11-24T09:29:33.918290Z","steps":["trace[1028120408] 'process raft request'  (duration: 68.31929ms)","trace[1028120408] 'compare'  (duration: 84.228272ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:29:47 up  1:12,  0 user,  load average: 4.33, 3.39, 2.20
	Linux default-k8s-diff-port-164377 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2045d127010b2b64364a054f68fb352481bdd66f24508bcf3fba662240dfd78e] <==
	I1124 09:29:23.018128       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:23.018510       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:29:23.018735       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:23.018760       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:23.018789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:23.222947       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:23.222994       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:23.223007       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:23.223129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:23.593911       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:23.593947       1 metrics.go:72] Registering metrics
	I1124 09:29:23.594035       1 controller.go:711] "Syncing nftables rules"
	I1124 09:29:33.223450       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:29:33.223521       1 main.go:301] handling current node
	I1124 09:29:43.223582       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:29:43.223621       1 main.go:301] handling current node
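
The single error line is kindnet's optional NRI plugin failing to reach an NRI socket the runtime is not exposing; kindnet carries on without it. NRI is an opt-in feature in CRI-O (gated by an enable_nri-style option in recent releases; treat the exact option name as an assumption), and the socket's absence is easy to confirm:

    ls -l /var/run/nri/nri.sock 2>/dev/null || echo "NRI socket not present"
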
	
	
	==> kube-apiserver [0eecabefa13f0f0bfae0dea97de59dbd8f0ee28d8082772a0bf5ab98ec4ed91c] <==
	I1124 09:29:14.212551       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:14.217259       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:29:14.217300       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 09:29:14.220698       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:14.221187       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:29:14.225404       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:14.225652       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:29:15.083403       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:29:15.087577       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:29:15.087600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:29:15.661200       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:15.716202       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:15.790103       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:29:15.796890       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 09:29:15.797945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:29:15.802480       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:16.198552       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:29:16.782711       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:29:16.811115       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:29:16.823844       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:29:21.451681       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:29:21.856498       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:21.860651       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:22.353552       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 09:29:45.296779       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:58040: use of closed network connection
	
	
	==> kube-controller-manager [c8388b83bf77f0f2ad1baa4560e73e7718db1ac83de6175fc2200d82ea1d036e] <==
	I1124 09:29:21.158881       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:29:21.158900       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:29:21.158907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:29:21.163642       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:29:21.185256       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:29:21.196517       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:29:21.196859       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:29:21.196039       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:29:21.196972       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:29:21.198082       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:29:21.198123       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:29:21.198204       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:29:21.199848       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:29:21.201667       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:29:21.201779       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:29:21.201976       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:29:21.202528       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:29:21.202833       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:29:21.205062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:29:21.206711       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:29:21.207969       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:29:21.208149       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:29:21.210853       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:29:21.222557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:29:36.148437       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c8c941b6208115ac9101eb96ebc2b2793f0b88aaee6b64dac094219197afed76] <==
	I1124 09:29:22.828350       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:29:22.903402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:29:23.004119       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:29:23.004186       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:29:23.005522       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:29:23.028184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:23.028253       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:29:23.034874       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:29:23.035317       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:29:23.035385       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:23.037484       1 config.go:200] "Starting service config controller"
	I1124 09:29:23.037515       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:29:23.037489       1 config.go:309] "Starting node config controller"
	I1124 09:29:23.037497       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:29:23.037547       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:29:23.037567       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:29:23.037548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:29:23.037579       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:29:23.037596       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:29:23.137681       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:29:23.137721       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:29:23.137903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
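
Following the warning's own suggestion, NodePort listening could be restricted to the primary node IP. In kubeadm-style clusters kube-proxy reads its KubeProxyConfiguration from a ConfigMap, so the change would go there rather than on the command line; a sketch (the "primary" keyword requires kube-proxy v1.29+, and the exact field placement is an assumption, not verified against this cluster):

    kubectl -n kube-system edit configmap kube-proxy
    # then, in the embedded KubeProxyConfiguration:
    #   nodePortAddresses: ["primary"]
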
	
	
	==> kube-scheduler [a3472212a766dfdb2e39d31a9b7cf7e85c7e74cd249328c26a854100f41f5fd9] <==
	E1124 09:29:14.206210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:29:14.206418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:29:14.206531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:29:14.206607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:29:14.207803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:29:14.207862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:29:14.208301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:29:14.208631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:29:14.208686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:29:14.208796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:29:14.209173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:29:14.209241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:29:14.209351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:29:14.209722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:29:14.209999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:29:14.210257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:29:15.104687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:29:15.138072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:29:15.177610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:29:15.196132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:29:15.239864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:29:15.416673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:29:15.438734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:29:15.447243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 09:29:16.800929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
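
These "Failed to watch" errors are the usual kubeadm bootstrap race: the scheduler starts before its RBAC rules exist, and the errors stop once permissions land, as the final "Caches are synced" line shows. The granting objects can be inspected after startup with:

    kubectl get clusterrolebinding system:kube-scheduler -o wide
    kubectl get clusterrole system:kube-scheduler
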
	
	
	==> kubelet <==
	Nov 24 09:29:17 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:17.888937    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-164377" podStartSLOduration=1.8889169780000001 podStartE2EDuration="1.888916978s" podCreationTimestamp="2025-11-24 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:17.873017355 +0000 UTC m=+1.264276651" watchObservedRunningTime="2025-11-24 09:29:17.888916978 +0000 UTC m=+1.280176280"
	Nov 24 09:29:17 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:17.893770    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-164377" podStartSLOduration=1.893750851 podStartE2EDuration="1.893750851s" podCreationTimestamp="2025-11-24 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:17.888636885 +0000 UTC m=+1.279896208" watchObservedRunningTime="2025-11-24 09:29:17.893750851 +0000 UTC m=+1.285010156"
	Nov 24 09:29:17 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:17.941088    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-164377" podStartSLOduration=1.941067904 podStartE2EDuration="1.941067904s" podCreationTimestamp="2025-11-24 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:17.922736936 +0000 UTC m=+1.313996241" watchObservedRunningTime="2025-11-24 09:29:17.941067904 +0000 UTC m=+1.332327209"
	Nov 24 09:29:17 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:17.959090    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-164377" podStartSLOduration=1.959049608 podStartE2EDuration="1.959049608s" podCreationTimestamp="2025-11-24 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:17.941842064 +0000 UTC m=+1.333101370" watchObservedRunningTime="2025-11-24 09:29:17.959049608 +0000 UTC m=+1.350308912"
	Nov 24 09:29:21 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:21.185018    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:29:21 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:21.185787    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493373    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/137008cc-e397-4752-952e-f66903bce62a-kube-proxy\") pod \"kube-proxy-2vm2s\" (UID: \"137008cc-e397-4752-952e-f66903bce62a\") " pod="kube-system/kube-proxy-2vm2s"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493420    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/137008cc-e397-4752-952e-f66903bce62a-xtables-lock\") pod \"kube-proxy-2vm2s\" (UID: \"137008cc-e397-4752-952e-f66903bce62a\") " pod="kube-system/kube-proxy-2vm2s"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493446    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pzgg\" (UniqueName: \"kubernetes.io/projected/137008cc-e397-4752-952e-f66903bce62a-kube-api-access-7pzgg\") pod \"kube-proxy-2vm2s\" (UID: \"137008cc-e397-4752-952e-f66903bce62a\") " pod="kube-system/kube-proxy-2vm2s"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493470    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07-cni-cfg\") pod \"kindnet-kwvs7\" (UID: \"1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07\") " pod="kube-system/kindnet-kwvs7"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493491    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k4f9\" (UniqueName: \"kubernetes.io/projected/1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07-kube-api-access-4k4f9\") pod \"kindnet-kwvs7\" (UID: \"1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07\") " pod="kube-system/kindnet-kwvs7"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493515    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/137008cc-e397-4752-952e-f66903bce62a-lib-modules\") pod \"kube-proxy-2vm2s\" (UID: \"137008cc-e397-4752-952e-f66903bce62a\") " pod="kube-system/kube-proxy-2vm2s"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493532    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07-xtables-lock\") pod \"kindnet-kwvs7\" (UID: \"1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07\") " pod="kube-system/kindnet-kwvs7"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.493563    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07-lib-modules\") pod \"kindnet-kwvs7\" (UID: \"1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07\") " pod="kube-system/kindnet-kwvs7"
	Nov 24 09:29:22 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:22.855758    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vm2s" podStartSLOduration=0.855736277 podStartE2EDuration="855.736277ms" podCreationTimestamp="2025-11-24 09:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:22.854559388 +0000 UTC m=+6.245818693" watchObservedRunningTime="2025-11-24 09:29:22.855736277 +0000 UTC m=+6.246995582"
	Nov 24 09:29:24 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:24.145853    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kwvs7" podStartSLOduration=2.145833448 podStartE2EDuration="2.145833448s" podCreationTimestamp="2025-11-24 09:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:22.869047366 +0000 UTC m=+6.260306671" watchObservedRunningTime="2025-11-24 09:29:24.145833448 +0000 UTC m=+7.537092753"
	Nov 24 09:29:33 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:33.703916    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:29:33 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:33.974321    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhmh\" (UniqueName: \"kubernetes.io/projected/829aa957-d18b-4e5d-b3ae-dca550b9db5d-kube-api-access-jnhmh\") pod \"storage-provisioner\" (UID: \"829aa957-d18b-4e5d-b3ae-dca550b9db5d\") " pod="kube-system/storage-provisioner"
	Nov 24 09:29:33 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:33.974406    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4debacc-a7df-4bc4-9d87-249d44299f91-config-volume\") pod \"coredns-66bc5c9577-gn9zx\" (UID: \"d4debacc-a7df-4bc4-9d87-249d44299f91\") " pod="kube-system/coredns-66bc5c9577-gn9zx"
	Nov 24 09:29:33 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:33.974442    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cjh\" (UniqueName: \"kubernetes.io/projected/d4debacc-a7df-4bc4-9d87-249d44299f91-kube-api-access-h9cjh\") pod \"coredns-66bc5c9577-gn9zx\" (UID: \"d4debacc-a7df-4bc4-9d87-249d44299f91\") " pod="kube-system/coredns-66bc5c9577-gn9zx"
	Nov 24 09:29:33 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:33.974531    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/829aa957-d18b-4e5d-b3ae-dca550b9db5d-tmp\") pod \"storage-provisioner\" (UID: \"829aa957-d18b-4e5d-b3ae-dca550b9db5d\") " pod="kube-system/storage-provisioner"
	Nov 24 09:29:34 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:34.909791    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.909767322 podStartE2EDuration="12.909767322s" podCreationTimestamp="2025-11-24 09:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:34.909715673 +0000 UTC m=+18.300974978" watchObservedRunningTime="2025-11-24 09:29:34.909767322 +0000 UTC m=+18.301026626"
	Nov 24 09:29:34 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:34.910119    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gn9zx" podStartSLOduration=12.910105441 podStartE2EDuration="12.910105441s" podCreationTimestamp="2025-11-24 09:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:34.895249882 +0000 UTC m=+18.286509187" watchObservedRunningTime="2025-11-24 09:29:34.910105441 +0000 UTC m=+18.301364745"
	Nov 24 09:29:37 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:37.298598    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtkm\" (UniqueName: \"kubernetes.io/projected/405ec516-207e-443a-b038-ac6f6da6efb1-kube-api-access-kmtkm\") pod \"busybox\" (UID: \"405ec516-207e-443a-b038-ac6f6da6efb1\") " pod="default/busybox"
	Nov 24 09:29:39 default-k8s-diff-port-164377 kubelet[1310]: I1124 09:29:39.951853    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.514009997 podStartE2EDuration="2.951831815s" podCreationTimestamp="2025-11-24 09:29:37 +0000 UTC" firstStartedPulling="2025-11-24 09:29:37.528835826 +0000 UTC m=+20.920095111" lastFinishedPulling="2025-11-24 09:29:38.966657631 +0000 UTC m=+22.357916929" observedRunningTime="2025-11-24 09:29:39.951288408 +0000 UTC m=+23.342547703" watchObservedRunningTime="2025-11-24 09:29:39.951831815 +0000 UTC m=+23.343091120"
	
	
	==> storage-provisioner [28451e4cc4038ba6f62a11199402fd18f98a8ce019899ea3a1ed180fcc9cfec9] <==
	I1124 09:29:34.331454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:29:34.348734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:29:34.348795       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:29:34.353429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:34.363049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:29:34.363831       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:29:34.364062       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c66cc1a2-9dbe-4e90-b04e-0717d7b6501e", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-164377_86d5a565-ce02-4f93-9f25-849812696227 became leader
	I1124 09:29:34.364246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_86d5a565-ce02-4f93-9f25-849812696227!
	W1124 09:29:34.369501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:34.376776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:29:34.464844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_86d5a565-ce02-4f93-9f25-849812696227!
	W1124 09:29:36.381154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:36.387003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:38.390588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:38.425006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:40.428473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:40.435517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:42.439744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:42.445163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:44.449250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:44.453311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:46.457322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:29:46.462496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
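
The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above come from its leader election, which still takes its lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event). What the warning steers toward is a coordination.k8s.io Lease lock. A minimal client-go sketch of that switch — illustrative only, not the provisioner's actual code; the in-cluster config and identity handling are assumptions:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// A Lease-based lock replaces the deprecated Endpoints lock; Leases in
		// coordination.k8s.io do not trigger the v1 Endpoints deprecation warning.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Printf("%s became leader; starting provisioner controller", id)
				},
				OnStoppedLeading: func() {
					log.Printf("%s lost the lease", id)
				},
			},
		})
	}

The two callbacks correspond to the "attempting to acquire leader lease" / "successfully acquired lease" lines in the log; with a Lease lock the periodic Endpoints warnings would disappear.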
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (314.50252ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:29:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
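
For context on the exit status 11 above: the MK_ADDON_ENABLE_PAUSED chain shows that "addons enable" first asks the runtime whether the cluster is paused by shelling out to "sudo runc list -f json", and that command fails here because /run/runc does not exist on this cri-o node, so the check dies before the addon is ever touched. A rough sketch of such a paused probe — a reconstruction from the error chain, not minikube's code; anyPaused and the configurable root are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields we need from `runc list -f json` output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// anyPaused lists runc containers under the given state root and reports
	// whether any of them is paused.
	func anyPaused(root string) (bool, error) {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			// This is the branch the test hit: runc exits non-zero when the
			// state root (here /run/runc) does not exist.
			return false, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return false, err
		}
		for _, c := range cs {
			if c.Status == "paused" {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		paused, err := anyPaused("/run/runc")
		fmt.Println(paused, err)
	}

Run against this node, the sketch would surface the same "open /run/runc: no such file or directory" failure, which is what propagates up as exit status 11.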
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-639420
helpers_test.go:243: (dbg) docker inspect newest-cni-639420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	        "Created": "2025-11-24T09:29:23.35779578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 336602,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:29:23.390324366Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hostname",
	        "HostsPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hosts",
	        "LogPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99-json.log",
	        "Name": "/newest-cni-639420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-639420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-639420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	                "LowerDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-639420",
	                "Source": "/var/lib/docker/volumes/newest-cni-639420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-639420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-639420",
	                "name.minikube.sigs.k8s.io": "newest-cni-639420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "edcff2e2b22705ed48815a7cee2849efe1f10e0d02d1c82d6900375d570fa738",
	            "SandboxKey": "/var/run/docker/netns/edcff2e2b227",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-639420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb5ecbfd413335d9913854ce166d0ab6940e67ee6eb0c6e4edd097241e0aa654",
	                    "EndpointID": "50d3bc2d95d1bb4ae59e8bff59907a72168c2bb680f93d2a1f3b67012a4b83c0",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "0a:72:39:8e:8a:63",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-639420",
	                        "71986ab5f5c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
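
The helpers below drill into this inspect document with Go templates (the later cli_runner calls use --format expressions such as {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}). Decoding the JSON directly is equivalent; a small sketch, assuming docker is on PATH and using an illustrative hostPort helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models only the slice of the docker inspect document we need.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// hostPort returns the first host port bound to a container port like "22/tcp".
	func hostPort(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var docs []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &docs); err != nil {
			return "", err
		}
		if len(docs) == 0 || len(docs[0].NetworkSettings.Ports[port]) == 0 {
			return "", fmt.Errorf("no binding for %s", port)
		}
		return docs[0].NetworkSettings.Ports[port][0].HostPort, nil
	}

	func main() {
		p, err := hostPort("newest-cni-639420", "22/tcp")
		fmt.Println(p, err) // given the Ports map above, this prints 33118
	}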
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-639420 logs -n 25
E1124 09:29:56.413213    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-767267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ ssh     │ -p bridge-949664 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo containerd config dump                                                                                                                                                                                                         │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo crio config                                                                                                                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ delete  │ -p bridge-949664                                                                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ stop    │ -p old-k8s-version-767267 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ stop    │ -p no-preload-938348 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:29:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:29:22.272630  335638 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:29:22.272773  335638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:29:22.272780  335638 out.go:374] Setting ErrFile to fd 2...
	I1124 09:29:22.272786  335638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:29:22.273086  335638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:29:22.273711  335638 out.go:368] Setting JSON to false
	I1124 09:29:22.275349  335638 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4308,"bootTime":1763972254,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:29:22.275434  335638 start.go:143] virtualization: kvm guest
	I1124 09:29:22.277385  335638 out.go:179] * [newest-cni-639420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:29:22.280788  335638 notify.go:221] Checking for updates...
	I1124 09:29:22.281556  335638 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:29:22.285973  335638 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:29:22.290210  335638 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:22.295020  335638 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:29:22.296440  335638 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:29:22.297660  335638 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:29:22.300201  335638 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:29:22.300438  335638 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:22.300608  335638 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:29:22.300773  335638 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:29:22.338416  335638 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:29:22.338809  335638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:29:22.436220  335638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:29:22.419440868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:29:22.436383  335638 docker.go:319] overlay module found
	I1124 09:29:22.438008  335638 out.go:179] * Using the docker driver based on user configuration
	I1124 09:29:22.439426  335638 start.go:309] selected driver: docker
	I1124 09:29:22.439445  335638 start.go:927] validating driver "docker" against <nil>
	I1124 09:29:22.439461  335638 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:29:22.440211  335638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:29:22.545267  335638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 09:29:22.530190875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:29:22.545483  335638 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 09:29:22.545524  335638 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 09:29:22.545801  335638 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:29:22.547822  335638 out.go:179] * Using Docker driver with root privileges
	I1124 09:29:22.550027  335638 cni.go:84] Creating CNI manager for ""
	I1124 09:29:22.550124  335638 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:22.550136  335638 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:29:22.550237  335638 start.go:353] cluster config:
	{Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:22.552057  335638 out.go:179] * Starting "newest-cni-639420" primary control-plane node in "newest-cni-639420" cluster
	I1124 09:29:22.553907  335638 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:29:22.556029  335638 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:29:22.557293  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:22.557467  335638 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	W1124 09:29:22.584375  335638 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 09:29:22.588704  335638 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:29:22.588745  335638 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:29:22.602625  335638 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 09:29:22.602854  335638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json ...
	I1124 09:29:22.602894  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json: {Name:mk7ccc8de387c8d9d793f2cc19c6bdd452036813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:22.603077  335638 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:29:22.603108  335638 start.go:360] acquireMachinesLock for newest-cni-639420: {Name:mka282f4f1046f315e8564ac5db60bb2850ef5e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:22.603114  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:22.603166  335638 start.go:364] duration metric: took 43.146µs to acquireMachinesLock for "newest-cni-639420"
	I1124 09:29:22.603190  335638 start.go:93] Provisioning new machine with config: &{Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:29:22.603284  335638 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:29:22.266382  326387 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:22.266403  326387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:29:22.266467  326387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:29:22.267204  326387 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	I1124 09:29:22.267247  326387 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:29:22.267721  326387 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:29:22.305659  326387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:29:22.309583  326387 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:22.309602  326387 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:29:22.309658  326387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:29:22.340814  326387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:29:22.361110  326387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:29:22.421701  326387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:22.451252  326387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:22.514208  326387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:22.634103  326387 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:29:22.635287  326387 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:29:22.897508  326387 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:29:22.127319  333403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:29:22.127371  333403 machine.go:97] duration metric: took 4.259939775s to provisionDockerMachine
	I1124 09:29:22.127386  333403 start.go:293] postStartSetup for "no-preload-938348" (driver="docker")
	I1124 09:29:22.127401  333403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:29:22.127467  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:29:22.127527  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.150125  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.262641  333403 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:29:22.267327  333403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:29:22.267426  333403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:29:22.267438  333403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:29:22.267509  333403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:29:22.267627  333403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:29:22.267738  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:29:22.279630  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:22.318304  333403 start.go:296] duration metric: took 190.884017ms for postStartSetup
	I1124 09:29:22.318392  333403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:29:22.318441  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.349125  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.491751  333403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:29:22.499171  333403 fix.go:56] duration metric: took 5.297394876s for fixHost
	I1124 09:29:22.499196  333403 start.go:83] releasing machines lock for "no-preload-938348", held for 5.297447121s
	I1124 09:29:22.499275  333403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-938348
	I1124 09:29:22.528437  333403 ssh_runner.go:195] Run: cat /version.json
	I1124 09:29:22.528502  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.528696  333403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:29:22.528785  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:22.556869  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.557396  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:22.749108  333403 ssh_runner.go:195] Run: systemctl --version
	I1124 09:29:22.761160  333403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:29:22.810862  333403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:29:22.820245  333403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:29:22.820398  333403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:29:22.832582  333403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:29:22.832607  333403 start.go:496] detecting cgroup driver to use...
	I1124 09:29:22.832638  333403 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:29:22.832680  333403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:29:22.856642  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:29:22.878589  333403 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:29:22.878700  333403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:29:22.898010  333403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:29:22.913490  333403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:29:23.015325  333403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:29:23.109378  333403 docker.go:234] disabling docker service ...
	I1124 09:29:23.109472  333403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:29:23.131263  333403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:29:23.146998  333403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:29:23.252976  333403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:29:23.357243  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:29:23.370932  333403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:29:23.386424  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:23.534936  333403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:29:23.535002  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.545781  333403 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:29:23.545842  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.556112  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.566045  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.575401  333403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:29:23.584608  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.594614  333403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.604212  333403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:23.613396  333403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:29:23.622174  333403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:29:23.629636  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:23.729103  333403 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:29:23.879968  333403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:29:23.880049  333403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:29:23.884669  333403 start.go:564] Will wait 60s for crictl version
	I1124 09:29:23.884720  333403 ssh_runner.go:195] Run: which crictl
	I1124 09:29:23.889584  333403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:29:23.920505  333403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:29:23.920586  333403 ssh_runner.go:195] Run: crio --version
	I1124 09:29:23.955234  333403 ssh_runner.go:195] Run: crio --version
	I1124 09:29:24.004516  333403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:29:22.899175  326387 addons.go:530] duration metric: took 663.317847ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:29:23.139102  326387 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-164377" context rescaled to 1 replicas
	I1124 09:29:20.654404  330481 addons.go:530] duration metric: took 3.007173003s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 09:29:20.655727  330481 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:29:20.655749  330481 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
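Only the rbac/bootstrap-roles poststarthook is failing; it clears once the apiserver finishes seeding the default RBAC roles, which matches the 200 returned a second later below. The same per-check detail can be queried by hand (a sketch; the address comes from the log, and it assumes anonymous access to /healthz, which Kubernetes allows by default):

	curl -k "https://192.168.76.2:8443/healthz?verbose"
	curl -k "https://192.168.76.2:8443/healthz/poststarthook/rbac/bootstrap-roles"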
	I1124 09:29:21.151464  330481 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:29:21.156548  330481 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:29:21.157870  330481 api_server.go:141] control plane version: v1.28.0
	I1124 09:29:21.157899  330481 api_server.go:131] duration metric: took 507.328952ms to wait for apiserver health ...
	I1124 09:29:21.157911  330481 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:21.163672  330481 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:21.163710  330481 system_pods.go:61] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:21.163721  330481 system_pods.go:61] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:21.163730  330481 system_pods.go:61] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:29:21.163739  330481 system_pods.go:61] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:21.163753  330481 system_pods.go:61] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:21.163763  330481 system_pods.go:61] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:29:21.163773  330481 system_pods.go:61] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:21.163783  330481 system_pods.go:61] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:21.163797  330481 system_pods.go:74] duration metric: took 5.878829ms to wait for pod list to return data ...
	I1124 09:29:21.163809  330481 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:21.167221  330481 default_sa.go:45] found service account: "default"
	I1124 09:29:21.167291  330481 default_sa.go:55] duration metric: took 3.474305ms for default service account to be created ...
	I1124 09:29:21.167308  330481 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:21.171183  330481 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:21.171207  330481 system_pods.go:89] "coredns-5dd5756b68-gmgwv" [fa53b4e5-62ed-42ac-82be-5f220cd9ab0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:21.171230  330481 system_pods.go:89] "etcd-old-k8s-version-767267" [aff80338-4222-4ee0-990e-71d85ab84883] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:21.171241  330481 system_pods.go:89] "kindnet-8tdrm" [de72ff2b-7361-460c-b1e8-288fb9a6eb03] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:29:21.171254  330481 system_pods.go:89] "kube-apiserver-old-k8s-version-767267" [4af980c6-66d6-4b78-86a1-e0560b86f196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:21.171263  330481 system_pods.go:89] "kube-controller-manager-old-k8s-version-767267" [ad989491-57c5-4844-9a39-61df766e8110] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:21.171275  330481 system_pods.go:89] "kube-proxy-b8kgc" [318115cc-de22-4a55-a7aa-2acc886827d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:29:21.171285  330481 system_pods.go:89] "kube-scheduler-old-k8s-version-767267" [d6f9519e-96af-4db1-855c-b4ac6e09c533] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:21.171291  330481 system_pods.go:89] "storage-provisioner" [6347c3c7-cb5b-42ab-abb8-9ca37af285b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:21.171300  330481 system_pods.go:126] duration metric: took 3.986426ms to wait for k8s-apps to be running ...
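All eight pods are Running but their containers are not yet Ready, so the extra per-pod wait further down is what actually gates completion. An equivalent manual query (illustrative only):

	kubectl -n kube-system get pods
	kubectl -n kube-system get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'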
	I1124 09:29:21.171310  330481 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:21.171387  330481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:21.203837  330481 system_svc.go:56] duration metric: took 32.519797ms WaitForService to wait for kubelet
	I1124 09:29:21.203864  330481 kubeadm.go:587] duration metric: took 3.55738421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:21.203880  330481 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:21.207017  330481 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:21.207050  330481 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:21.207067  330481 node_conditions.go:105] duration metric: took 3.18201ms to run NodePressure ...
	I1124 09:29:21.207081  330481 start.go:242] waiting for startup goroutines ...
	I1124 09:29:21.207090  330481 start.go:247] waiting for cluster config update ...
	I1124 09:29:21.207102  330481 start.go:256] writing updated cluster config ...
	I1124 09:29:21.207440  330481 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:21.212241  330481 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:21.218052  330481 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gmgwv" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:29:23.225315  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:24.005626  333403 cli_runner.go:164] Run: docker network inspect no-preload-938348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:24.029008  333403 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 09:29:24.034021  333403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
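The one-liner above is minikube's idiom for editing /etc/hosts without an in-place sudo sed: strip any stale host.minikube.internal entry with grep -v, append the current gateway IP, stage the result in /tmp, then copy it back as root. Checking the result (illustrative):

	grep host.minikube.internal /etc/hosts
	# expected: 192.168.94.1	host.minikube.internal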
	I1124 09:29:24.047095  333403 kubeadm.go:884] updating cluster {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:29:24.047302  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.218154  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.389851  333403 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:24.541160  333403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:24.541226  333403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:29:24.578918  333403 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:29:24.578939  333403 cache_images.go:86] Images are preloaded, skipping loading
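The preload check lists what CRI-O already has and compares it against the expected image set for v1.35.0-beta.0. A hand-run equivalent of the same listing (jq is an assumption here; it may not be installed on the node):

	sudo crictl images --output json | jq -r '.images[].repoTags[]?'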
	I1124 09:29:24.578949  333403 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:29:24.579051  333403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-938348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
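Note the empty ExecStart= line in the drop-in above: in systemd it clears the ExecStart inherited from the base kubelet.service so the following line can substitute minikube's full command line. The effective unit can be inspected with (illustrative):

	systemctl cat kubelet              # base unit plus drop-ins, in order
	systemctl show kubelet -p ExecStart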
	I1124 09:29:24.579135  333403 ssh_runner.go:195] Run: crio config
	I1124 09:29:24.624916  333403 cni.go:84] Creating CNI manager for ""
	I1124 09:29:24.624945  333403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:24.624965  333403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:29:24.624998  333403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-938348 NodeName:no-preload-938348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:29:24.625197  333403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-938348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:29:24.625264  333403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:24.634370  333403 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:29:24.634419  333403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:29:24.642798  333403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:29:24.656549  333403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:29:24.669132  333403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
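With the generated config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked without touching the cluster; a sketch using the binary path from this run (kubeadm config validate exists since v1.26):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new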
	I1124 09:29:24.681839  333403 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:29:24.685751  333403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:29:24.695690  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:24.778867  333403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:24.806996  333403 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348 for IP: 192.168.94.2
	I1124 09:29:24.807015  333403 certs.go:195] generating shared ca certs ...
	I1124 09:29:24.807035  333403 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:24.807182  333403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:29:24.807254  333403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:29:24.807267  333403 certs.go:257] generating profile certs ...
	I1124 09:29:24.807411  333403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/client.key
	I1124 09:29:24.807497  333403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key.64ae9983
	I1124 09:29:24.807556  333403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key
	I1124 09:29:24.807691  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:29:24.807735  333403 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:29:24.807749  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:29:24.807783  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:29:24.807819  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:29:24.807858  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:29:24.807920  333403 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:24.808541  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:29:24.827500  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:29:24.848541  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:29:24.872286  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:29:24.896028  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:29:24.917352  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:29:24.934258  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:29:24.951088  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/no-preload-938348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:29:24.967701  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:29:24.984489  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:29:25.002482  333403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:29:25.020506  333403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:29:25.033647  333403 ssh_runner.go:195] Run: openssl version
	I1124 09:29:25.039835  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:29:25.048276  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.051920  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.051971  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:29:25.089557  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:29:25.098198  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:29:25.107138  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.111148  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.111218  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:25.149419  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:29:25.158007  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:29:25.166936  333403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.170800  333403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.170846  333403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:29:25.206573  333403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
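The test -L / ln -fs pairs above implement OpenSSL's CA directory convention: each certificate under /etc/ssl/certs is found via a symlink named after its subject hash with a .0 suffix. Rebuilding one link by hand (a sketch; in this run the hash is b5213941):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"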
	I1124 09:29:25.214835  333403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:29:25.218635  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:29:25.254289  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:29:25.291772  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:29:25.340024  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:29:25.389077  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:29:25.438716  333403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
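-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is how minikube decides whether a cert needs regenerating. In script form (illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h; regenerate"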
	I1124 09:29:25.497608  333403 kubeadm.go:401] StartCluster: {Name:no-preload-938348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-938348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:25.497724  333403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:29:25.497778  333403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:29:25.534464  333403 cri.go:89] found id: "3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9"
	I1124 09:29:25.534487  333403 cri.go:89] found id: "36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652"
	I1124 09:29:25.534493  333403 cri.go:89] found id: "a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548"
	I1124 09:29:25.534513  333403 cri.go:89] found id: "bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751"
	I1124 09:29:25.534518  333403 cri.go:89] found id: ""
	I1124 09:29:25.534562  333403 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:29:25.547091  333403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:29:25Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:29:25.547156  333403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:29:25.557888  333403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:29:25.557906  333403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:29:25.557949  333403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:29:25.565397  333403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:29:25.566348  333403 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-938348" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:25.566811  333403 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-938348" cluster setting kubeconfig missing "no-preload-938348" context setting]
	I1124 09:29:25.567631  333403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.569405  333403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:29:25.577075  333403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 09:29:25.577105  333403 kubeadm.go:602] duration metric: took 19.193886ms to restartPrimaryControlPlane
	I1124 09:29:25.577114  333403 kubeadm.go:403] duration metric: took 79.517412ms to StartCluster
	I1124 09:29:25.577130  333403 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.577190  333403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:25.578596  333403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:25.578833  333403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:29:25.578891  333403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:29:25.578988  333403 addons.go:70] Setting storage-provisioner=true in profile "no-preload-938348"
	I1124 09:29:25.579008  333403 addons.go:239] Setting addon storage-provisioner=true in "no-preload-938348"
	W1124 09:29:25.579016  333403 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:29:25.579013  333403 addons.go:70] Setting dashboard=true in profile "no-preload-938348"
	I1124 09:29:25.579038  333403 addons.go:239] Setting addon dashboard=true in "no-preload-938348"
	I1124 09:29:25.579043  333403 host.go:66] Checking if "no-preload-938348" exists ...
	W1124 09:29:25.579048  333403 addons.go:248] addon dashboard should already be in state true
	I1124 09:29:25.579047  333403 addons.go:70] Setting default-storageclass=true in profile "no-preload-938348"
	I1124 09:29:25.579077  333403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-938348"
	I1124 09:29:25.579079  333403 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:29:25.579056  333403 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:25.579415  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.579572  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.579577  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.581476  333403 out.go:179] * Verifying Kubernetes components...
	I1124 09:29:25.582938  333403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:25.614256  333403 addons.go:239] Setting addon default-storageclass=true in "no-preload-938348"
	W1124 09:29:25.614279  333403 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:29:25.614305  333403 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:29:25.614804  333403 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:29:25.617177  333403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:25.617869  333403 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:29:25.618562  333403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:25.618581  333403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:29:25.618636  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.620039  333403 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:29:22.606204  335638 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:29:22.606500  335638 start.go:159] libmachine.API.Create for "newest-cni-639420" (driver="docker")
	I1124 09:29:22.606540  335638 client.go:173] LocalClient.Create starting
	I1124 09:29:22.606611  335638 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:29:22.606649  335638 main.go:143] libmachine: Decoding PEM data...
	I1124 09:29:22.606672  335638 main.go:143] libmachine: Parsing certificate...
	I1124 09:29:22.606733  335638 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:29:22.606768  335638 main.go:143] libmachine: Decoding PEM data...
	I1124 09:29:22.606791  335638 main.go:143] libmachine: Parsing certificate...
	I1124 09:29:22.607211  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:29:22.635929  335638 cli_runner.go:211] docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:29:22.636004  335638 network_create.go:284] running [docker network inspect newest-cni-639420] to gather additional debugging logs...
	I1124 09:29:22.636023  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420
	W1124 09:29:22.660145  335638 cli_runner.go:211] docker network inspect newest-cni-639420 returned with exit code 1
	I1124 09:29:22.660178  335638 network_create.go:287] error running [docker network inspect newest-cni-639420]: docker network inspect newest-cni-639420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-639420 not found
	I1124 09:29:22.660195  335638 network_create.go:289] output of [docker network inspect newest-cni-639420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-639420 not found
	
	** /stderr **
	I1124 09:29:22.660349  335638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:22.692502  335638 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:29:22.696047  335638 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:29:22.697241  335638 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:29:22.698001  335638 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49a891848d14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:26:80:16:6d:29} reservation:<nil>}
	I1124 09:29:22.699090  335638 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c1e006301495 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d2:93:0f:4e:2a:a4} reservation:<nil>}
	I1124 09:29:22.699942  335638 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-3f03f3b5e2bf IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:d5:81:14:0a:58} reservation:<nil>}
	I1124 09:29:22.701166  335638 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4e360}
	I1124 09:29:22.701203  335638 network_create.go:124] attempt to create docker network newest-cni-639420 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:29:22.701253  335638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-639420 newest-cni-639420
	I1124 09:29:22.765511  335638 network_create.go:108] docker network newest-cni-639420 192.168.103.0/24 created
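The subnet scan above walks the 192.168.x.0/24 candidates and takes the first one no existing bridge occupies, here 192.168.103.0/24. The occupied subnets can be listed the same way by hand (illustrative):

	docker network ls --filter driver=bridge -q \
	  | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'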
	I1124 09:29:22.765544  335638 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-639420" container
	I1124 09:29:22.765607  335638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:29:22.785985  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:22.791490  335638 cli_runner.go:164] Run: docker volume create newest-cni-639420 --label name.minikube.sigs.k8s.io=newest-cni-639420 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:29:22.815309  335638 oci.go:103] Successfully created a docker volume newest-cni-639420
	I1124 09:29:22.815447  335638 cli_runner.go:164] Run: docker run --rm --name newest-cni-639420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-639420 --entrypoint /usr/bin/test -v newest-cni-639420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:29:22.965281  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:23.125493  335638 cache.go:107] acquiring lock: {Name:mk50e8a993397cfd35eb04bbf3ec3f2f16922e03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125590  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:29:23.125599  335638 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.945µs
	I1124 09:29:23.125612  335638 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:29:23.125628  335638 cache.go:107] acquiring lock: {Name:mk44ea28b5ef083e518e10f8b09fe20e117fa612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125665  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:29:23.125672  335638 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.529µs
	I1124 09:29:23.125680  335638 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125696  335638 cache.go:107] acquiring lock: {Name:mk22cdf247cbd1eba82607ef17480dc2601681cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125763  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:29:23.125776  335638 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 82.084µs
	I1124 09:29:23.125785  335638 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125801  335638 cache.go:107] acquiring lock: {Name:mkbf0dee95f0ab47974350aecf97d10e64a67897 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.125892  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:29:23.125899  335638 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 101.39µs
	I1124 09:29:23.125908  335638 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:29:23.125921  335638 cache.go:107] acquiring lock: {Name:mk02678e83bd0bc783689569fa5806aa92d36dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126065  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:29:23.126072  335638 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 153.418µs
	I1124 09:29:23.126079  335638 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:29:23.126093  335638 cache.go:107] acquiring lock: {Name:mk4b39f728589920114b6f2c68f5093e514fadca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126142  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:29:23.126148  335638 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 58.094µs
	I1124 09:29:23.126158  335638 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:29:23.126171  335638 cache.go:107] acquiring lock: {Name:mk7db92c93cf19a2f7751497e327ce09d843bbd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126202  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:29:23.126208  335638 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 39.873µs
	I1124 09:29:23.126215  335638 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:29:23.126227  335638 cache.go:107] acquiring lock: {Name:mk690ae61adbe621ac8f3906853ffca5c6beb812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:29:23.126265  335638 cache.go:115] /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:29:23.126270  335638 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.169µs
	I1124 09:29:23.126277  335638 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:29:23.126285  335638 cache.go:87] Successfully saved all images to host disk.
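Each cache hit above is just a stat of a tarball saved under the host's .minikube tree, keyed by registry path and tag, which is why every check completes in microseconds. The cache for this run can be listed directly (path from the log):

	ls /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/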
	I1124 09:29:23.264663  335638 oci.go:107] Successfully prepared a docker volume newest-cni-639420
	I1124 09:29:23.264743  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1124 09:29:23.264839  335638 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:29:23.264876  335638 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:29:23.264920  335638 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:29:23.340405  335638 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-639420 --name newest-cni-639420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-639420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-639420 --network newest-cni-639420 --ip 192.168.103.2 --volume newest-cni-639420:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
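Each --publish=127.0.0.1:: mapping in the docker run above binds a random loopback port on the host; minikube recovers them later via container inspect. A quicker manual equivalent (illustrative; this run's SSH mapping resolves to 127.0.0.1:33118, as seen below):

	docker port newest-cni-639420 22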
	I1124 09:29:23.676315  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Running}}
	I1124 09:29:23.696133  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:23.716195  335638 cli_runner.go:164] Run: docker exec newest-cni-639420 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:29:23.767472  335638 oci.go:144] the created container "newest-cni-639420" has a running status.
	I1124 09:29:23.767502  335638 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa...
	I1124 09:29:23.880299  335638 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:29:23.913685  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:23.938407  335638 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:29:23.938430  335638 kic_runner.go:114] Args: [docker exec --privileged newest-cni-639420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:29:23.995674  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:24.019807  335638 machine.go:94] provisionDockerMachine start ...
	I1124 09:29:24.019886  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.039934  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.040236  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.040257  335638 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:29:24.194642  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-639420
	
	I1124 09:29:24.194671  335638 ubuntu.go:182] provisioning hostname "newest-cni-639420"
	I1124 09:29:24.194731  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.214977  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.215249  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.215272  335638 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-639420 && echo "newest-cni-639420" | sudo tee /etc/hostname
	I1124 09:29:24.375311  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-639420
	
	I1124 09:29:24.375402  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.396206  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.396483  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.396508  335638 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639420/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:29:24.542631  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:29:24.542659  335638 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:29:24.542699  335638 ubuntu.go:190] setting up certificates
	I1124 09:29:24.542712  335638 provision.go:84] configureAuth start
	I1124 09:29:24.542757  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:24.563950  335638 provision.go:143] copyHostCerts
	I1124 09:29:24.564035  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:29:24.564060  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:29:24.564140  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:29:24.564260  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:29:24.564274  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:29:24.564314  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:29:24.564451  335638 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:29:24.564466  335638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:29:24.564508  335638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:29:24.564572  335638 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639420 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-639420]
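Note: the server certificate generated here must carry every name and address the apiserver will be reached by, hence the san=[...] list in the line above. With a reasonably recent openssl (1.1.1+ supports -ext) the SANs can be checked directly; the output should list the same five entries, in some order:

	$ openssl x509 -in /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem \
	      -noout -ext subjectAltName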
	I1124 09:29:24.634070  335638 provision.go:177] copyRemoteCerts
	I1124 09:29:24.634118  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:29:24.634148  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.654625  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:24.757462  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:29:24.776220  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:29:24.794520  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:29:24.813093  335638 provision.go:87] duration metric: took 270.366164ms to configureAuth
	I1124 09:29:24.813121  335638 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:29:24.813292  335638 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:24.813401  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:24.834251  335638 main.go:143] libmachine: Using SSH client type: native
	I1124 09:29:24.834536  335638 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1124 09:29:24.834557  335638 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:29:25.126491  335638 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:29:25.126511  335638 machine.go:97] duration metric: took 1.106687482s to provisionDockerMachine
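Note: writing /etc/sysconfig/crio.minikube only changes anything because (assumption about the kicbase image, not shown in this log) the crio systemd unit sources that file as an EnvironmentFile and expands $CRIO_MINIKUBE_OPTIONS on its command line. Two quick checks, run inside the container:

	$ systemctl cat crio | grep -i EnvironmentFile    # assumption: unit references /etc/sysconfig/crio.minikube
	$ ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry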
	I1124 09:29:25.126521  335638 client.go:176] duration metric: took 2.519971934s to LocalClient.Create
	I1124 09:29:25.126544  335638 start.go:167] duration metric: took 2.520045206s to libmachine.API.Create "newest-cni-639420"
	I1124 09:29:25.126559  335638 start.go:293] postStartSetup for "newest-cni-639420" (driver="docker")
	I1124 09:29:25.126572  335638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:29:25.126630  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:29:25.126678  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.145747  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.250031  335638 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:29:25.253397  335638 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:29:25.253419  335638 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:29:25.253429  335638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:29:25.253477  335638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:29:25.253562  335638 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:29:25.253646  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:29:25.261375  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:25.281316  335638 start.go:296] duration metric: took 154.739012ms for postStartSetup
	I1124 09:29:25.281675  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:25.300835  335638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/config.json ...
	I1124 09:29:25.301127  335638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:29:25.301177  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.321665  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.432025  335638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:29:25.437384  335638 start.go:128] duration metric: took 2.834085276s to createHost
	I1124 09:29:25.437412  335638 start.go:83] releasing machines lock for "newest-cni-639420", held for 2.834231966s
	I1124 09:29:25.437518  335638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-639420
	I1124 09:29:25.463404  335638 ssh_runner.go:195] Run: cat /version.json
	I1124 09:29:25.463463  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.463541  335638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:29:25.463627  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:25.489226  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.491764  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:25.602110  335638 ssh_runner.go:195] Run: systemctl --version
	I1124 09:29:25.688534  335638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:29:25.738028  335638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:29:25.744117  335638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:29:25.744186  335638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:29:25.776213  335638 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
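Note: the find command at 09:29:25.744 is logged with its shell quoting stripped. A properly quoted equivalent, which renames any bridge/podman CNI configs out of the way so minikube's own CNI choice wins, would look like this (a re-quoted sketch, not the literal string minikube sent):

	$ sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;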
	I1124 09:29:25.776234  335638 start.go:496] detecting cgroup driver to use...
	I1124 09:29:25.776266  335638 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:29:25.776315  335638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:29:25.800921  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:29:25.816518  335638 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:29:25.816573  335638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:29:25.835996  335638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:29:25.859536  335638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:29:25.956711  335638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:29:26.048642  335638 docker.go:234] disabling docker service ...
	I1124 09:29:26.048701  335638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:29:26.070200  335638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:29:26.082794  335638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:29:26.161849  335638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:29:26.244252  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
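Note: stop + disable the socket + mask the service is deliberate: disabling alone would still let socket activation restart dockerd on the next connection to /var/run/docker.sock. The masked state is easy to confirm (is-enabled exits non-zero for masked units, hence the || true):

	$ systemctl is-enabled docker.service cri-docker.service || true
	masked
	masked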
	I1124 09:29:26.256768  335638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:29:26.270526  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:26.413542  335638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:29:26.413610  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.424485  335638 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:29:26.424548  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.433347  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.442022  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.450535  335638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:29:26.458406  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.466709  335638 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:29:26.485185  335638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
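Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A grep makes the net result visible (expected output reconstructed from the sed commands, not captured from the host):

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",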
	I1124 09:29:26.494936  335638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:29:26.503451  335638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:29:26.512446  335638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:26.607745  335638 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:29:26.740004  335638 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:29:26.740072  335638 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:29:26.744918  335638 start.go:564] Will wait 60s for crictl version
	I1124 09:29:26.744980  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:26.749213  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:29:26.778225  335638 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:29:26.778341  335638 ssh_runner.go:195] Run: crio --version
	I1124 09:29:26.817193  335638 ssh_runner.go:195] Run: crio --version
	I1124 09:29:26.857658  335638 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:29:25.621005  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:29:25.621027  333403 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:29:25.621082  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.650236  333403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:25.650263  333403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:29:25.650325  333403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:29:25.656443  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.656558  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.680533  333403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:29:25.754168  333403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:25.768201  333403 node_ready.go:35] waiting up to 6m0s for node "no-preload-938348" to be "Ready" ...
	I1124 09:29:25.780161  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:25.785322  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:29:25.785364  333403 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:29:25.802823  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:29:25.802851  333403 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:29:25.803558  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:25.820621  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:29:25.820645  333403 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:29:25.835245  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:29:25.835268  333403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:29:25.851624  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:29:25.851655  333403 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:29:25.868485  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:29:25.868512  333403 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:29:25.881781  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:29:25.881805  333403 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:29:25.898837  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:29:25.898859  333403 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:29:25.913417  333403 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:29:25.913444  333403 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:29:25.929159  333403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
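Note: once the ten dashboard manifests are applied, readiness can be followed with a rollout wait; the namespace and deployment names below are assumed to be the dashboard addon's defaults, since they do not appear in this log:

	$ kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard   # names assumed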
	I1124 09:29:26.859114  335638 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:29:26.881311  335638 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:29:26.885555  335638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:29:26.898515  335638 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 09:29:26.899623  335638 kubeadm.go:884] updating cluster {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:29:26.899846  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:27.062935  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:27.248815  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
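Note: "Not caching binary" means kubeadm is streamed straight from dl.k8s.io and validated against the published .sha256 file instead of being served from the local cache. The manual equivalent of that checksum handshake is the standard Kubernetes release recipe, using the URL from the log line above:

	$ curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
	$ curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256"
	$ echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
	kubeadm: OK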
	I1124 09:29:27.420412  333403 node_ready.go:49] node "no-preload-938348" is "Ready"
	I1124 09:29:27.420445  333403 node_ready.go:38] duration metric: took 1.652208535s for node "no-preload-938348" to be "Ready" ...
	I1124 09:29:27.420461  333403 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:29:27.420510  333403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:29:28.112081  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331875566s)
	I1124 09:29:28.112133  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.30847927s)
	I1124 09:29:28.112267  333403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.183075802s)
	I1124 09:29:28.112327  333403 api_server.go:72] duration metric: took 2.533463434s to wait for apiserver process to appear ...
	I1124 09:29:28.112396  333403 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:29:28.112418  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:28.113923  333403 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-938348 addons enable metrics-server
	
	I1124 09:29:28.118193  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:29:28.118221  333403 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:29:28.122505  333403 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1124 09:29:24.638988  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:27.138848  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:25.726359  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:28.229199  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:28.123704  333403 addons.go:530] duration metric: took 2.544816591s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:29:28.613397  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:28.619304  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:29:28.619375  333403 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:29:29.112960  333403 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:29:29.118507  333403 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:29:29.119708  333403 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:29:29.119736  333403 api_server.go:131] duration metric: took 1.00733118s to wait for apiserver health ...
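Note: the 500 responses above are expected while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish; the log shows them flipping to ok one poll apart. The same per-component breakdown can be requested on demand through the apiserver's verbose healthz endpoint, authenticating with the kubeconfig path seen earlier in this log:

	$ kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw '/healthz?verbose'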
	I1124 09:29:29.119749  333403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:29.123578  333403 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:29.123627  333403 system_pods.go:61] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:29.123647  333403 system_pods.go:61] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:29.123658  333403 system_pods.go:61] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:29:29.123669  333403 system_pods.go:61] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:29.123682  333403 system_pods.go:61] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:29.123689  333403 system_pods.go:61] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:29:29.123698  333403 system_pods.go:61] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:29.123704  333403 system_pods.go:61] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Running
	I1124 09:29:29.123711  333403 system_pods.go:74] duration metric: took 3.95574ms to wait for pod list to return data ...
	I1124 09:29:29.123727  333403 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:29.126930  333403 default_sa.go:45] found service account: "default"
	I1124 09:29:29.126946  333403 default_sa.go:55] duration metric: took 3.214232ms for default service account to be created ...
	I1124 09:29:29.126954  333403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:29.129597  333403 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:29.129621  333403 system_pods.go:89] "coredns-7d764666f9-ll2c4" [9f976359-8745-4fe5-8cc4-df9cafaca113] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:29.129629  333403 system_pods.go:89] "etcd-no-preload-938348" [f64c1f91-d65d-483d-9702-da61053fc34e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:29:29.129637  333403 system_pods.go:89] "kindnet-zrnnf" [ade02f32-ef6b-4bca-b2da-3a67433a796c] Running
	I1124 09:29:29.129646  333403 system_pods.go:89] "kube-apiserver-no-preload-938348" [dc59fbc6-9b29-4422-826c-c65c23e5767b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:29:29.129659  333403 system_pods.go:89] "kube-controller-manager-no-preload-938348" [70a934f6-cdab-441e-b04f-cae5940dc0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:29.129670  333403 system_pods.go:89] "kube-proxy-smqgp" [045fb194-89ac-48bb-a9af-24c93032274f] Running
	I1124 09:29:29.129680  333403 system_pods.go:89] "kube-scheduler-no-preload-938348" [5799f86f-5b8f-4492-9a26-d7a3749ae301] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:29.129688  333403 system_pods.go:89] "storage-provisioner" [701c213c-777c-488b-972b-2c1c4ad85d6a] Running
	I1124 09:29:29.129696  333403 system_pods.go:126] duration metric: took 2.736234ms to wait for k8s-apps to be running ...
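Note: "Running / Ready:ContainersNotReady" is the pod phase followed by two pod conditions; the wait above evidently only requires the Running phase, since it proceeds despite the unready control-plane containers. The snapshot corresponds to an ordinary pod listing:

	$ kubectl -n kube-system get pods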
	I1124 09:29:29.129709  333403 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:29.129761  333403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:29.144790  333403 system_svc.go:56] duration metric: took 15.076412ms WaitForService to wait for kubelet
	I1124 09:29:29.144816  333403 kubeadm.go:587] duration metric: took 3.565955333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:29.144836  333403 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:29.148142  333403 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:29.148166  333403 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:29.148179  333403 node_conditions.go:105] duration metric: took 3.338764ms to run NodePressure ...
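Note: the NodePressure figures come from the node's capacity block; the same numbers are visible directly (a sketch, with fields not shown in this log elided):

	$ kubectl get node no-preload-938348 -o jsonpath='{.status.capacity}{"\n"}'
	{"cpu":"8","ephemeral-storage":"304681132Ki",...}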
	I1124 09:29:29.148190  333403 start.go:242] waiting for startup goroutines ...
	I1124 09:29:29.148197  333403 start.go:247] waiting for cluster config update ...
	I1124 09:29:29.148207  333403 start.go:256] writing updated cluster config ...
	I1124 09:29:29.148587  333403 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:29.153407  333403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:29.157433  333403 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ll2c4" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:29:31.163742  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:27.419562  335638 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:29:27.419629  335638 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:29:27.460918  335638 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:29:27.460945  335638 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:29:27.461011  335638 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:27.461369  335638 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.461449  335638 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.461464  335638 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.461655  335638 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.461712  335638 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.461854  335638 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.461897  335638 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.464257  335638 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.464411  335638 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.464628  335638 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.464689  335638 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.464988  335638 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.465073  335638 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.465265  335638 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:27.467072  335638 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.627554  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.628494  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.656948  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:29:27.659842  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.660130  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.661481  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.669123  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.675953  335638 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:29:27.675989  335638 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.676029  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.676133  335638 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:29:27.676154  335638 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.676189  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.718910  335638 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:29:27.718963  335638 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:29:27.719008  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.722396  335638 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:29:27.722439  335638 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.722502  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.773905  335638 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:29:27.773953  335638 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.773959  335638 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:29:27.773992  335638 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.774000  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774024  335638 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:29:27.774032  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774053  335638 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.774087  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:27.774164  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.774169  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.774222  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.774246  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.820103  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.820178  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.820194  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.820205  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.820254  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.859804  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.859896  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.869005  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.869109  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:29:27.869193  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.870157  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.873591  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:29:27.914164  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:29:27.914280  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:29:27.935841  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:29:27.940089  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:29:27.940094  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:29:27.940233  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:29:27.940316  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:27.940653  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:27.940724  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:27.970433  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:29:27.970782  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:27.982144  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:27.982159  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:29:27.982326  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:27.982393  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:27.988403  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:29:27.988475  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:27.988506  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:27.988569  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:27.988584  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.988601  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:29:27.988570  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:29:27.988633  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:29:27.988663  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:29:27.988684  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:29:27.989467  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.989489  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:29:27.989530  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:29:27.989547  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:29:28.003399  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:29:28.003439  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:29:28.003507  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:29:28.003519  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
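
[Editor's annotation] The block above is minikube's check-then-copy pattern: each cached image is probed on the remote machine with `stat -c "%s %y"` (size plus mtime), and only on a failed check is the tarball scp'd into /var/lib/minikube/images. A minimal Go sketch of the same idea, run locally for illustration — the `ensureImage` helper and the cache path are hypothetical, not minikube's real ssh_runner API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // exists mirrors the logged probe: `stat -c "%s %y" <path>` succeeds
    // only when the file is already in place.
    func exists(path string) bool {
    	out, err := exec.Command("stat", "-c", "%s %y", path).Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    // ensureImage copies the cached tarball over only when the existence
    // check fails, which is exactly the branch taken in the log above.
    func ensureImage(cache, dest string) error {
    	if exists(dest) {
    		return nil // check passed; skip the transfer
    	}
    	return exec.Command("cp", cache, dest).Run()
    }

    func main() {
    	err := ensureImage(
    		"/tmp/cache/pause_3.10.1", // hypothetical local cache path
    		"/var/lib/minikube/images/pause_3.10.1",
    	)
    	fmt.Println("ensureImage:", err)
    }
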
	I1124 09:29:28.075946  335638 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:28.076025  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 09:29:28.435196  335638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:28.467390  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 09:29:28.467433  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:28.467481  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:29:28.544058  335638 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:29:28.544104  335638 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:28.544158  335638 ssh_runner.go:195] Run: which crictl
	I1124 09:29:29.658664  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.191161688s)
	I1124 09:29:29.658696  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:29:29.658705  335638 ssh_runner.go:235] Completed: which crictl: (1.114530607s)
	I1124 09:29:29.658722  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:29.658761  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:29:29.658764  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:30.831023  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.172237149s)
	I1124 09:29:30.831058  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:29:30.831097  335638 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:30.831104  335638 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.172303496s)
	I1124 09:29:30.831157  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:30.831161  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:29:30.857684  335638 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:32.198551  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.367365601s)
	I1124 09:29:32.198579  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:29:32.198601  335638 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:32.198644  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:29:32.198644  335638 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.3409304s)
	I1124 09:29:32.198688  335638 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:29:32.198817  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1124 09:29:29.138937  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:31.140211  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:33.639508  326387 node_ready.go:57] node "default-k8s-diff-port-164377" has "Ready":"False" status (will retry)
	W1124 09:29:30.724219  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:33.226759  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:33.166158  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:35.169759  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:34.139080  326387 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:34.139124  326387 node_ready.go:38] duration metric: took 11.50378844s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:29:34.139140  326387 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:29:34.139192  326387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:29:34.158537  326387 api_server.go:72] duration metric: took 11.922691746s to wait for apiserver process to appear ...
	I1124 09:29:34.158566  326387 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:29:34.158588  326387 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:29:34.167073  326387 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:29:34.168208  326387 api_server.go:141] control plane version: v1.34.2
	I1124 09:29:34.168234  326387 api_server.go:131] duration metric: took 9.659516ms to wait for apiserver health ...
	I1124 09:29:34.168244  326387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:34.172677  326387 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:34.172739  326387 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.172755  326387 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.172764  326387 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.172771  326387 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.172776  326387 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.172787  326387 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.172800  326387 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.172821  326387 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.172834  326387 system_pods.go:74] duration metric: took 4.582003ms to wait for pod list to return data ...
	I1124 09:29:34.172844  326387 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:34.175534  326387 default_sa.go:45] found service account: "default"
	I1124 09:29:34.175550  326387 default_sa.go:55] duration metric: took 2.700612ms for default service account to be created ...
	I1124 09:29:34.175561  326387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:29:34.179181  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.179211  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.179219  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.179226  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.179232  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.179237  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.179242  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.179247  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.179254  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.179275  326387 retry.go:31] will retry after 297.148701ms: missing components: kube-dns
	I1124 09:29:34.488122  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.488167  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.488175  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.488184  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.488190  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.488196  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.488203  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.488208  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.488215  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.488232  326387 retry.go:31] will retry after 287.470129ms: missing components: kube-dns
	I1124 09:29:34.781622  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:34.781657  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:29:34.781666  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:34.781674  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:34.781680  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:34.781685  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:34.781690  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:34.781698  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:34.781712  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:29:34.781731  326387 retry.go:31] will retry after 468.737219ms: missing components: kube-dns
	I1124 09:29:35.258570  326387 system_pods.go:86] 8 kube-system pods found
	I1124 09:29:35.258605  326387 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running
	I1124 09:29:35.258613  326387 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running
	I1124 09:29:35.258619  326387 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:29:35.258625  326387 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running
	I1124 09:29:35.258631  326387 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running
	I1124 09:29:35.258635  326387 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:29:35.258641  326387 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running
	I1124 09:29:35.258645  326387 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:29:35.258654  326387 system_pods.go:126] duration metric: took 1.083086655s to wait for k8s-apps to be running ...
	I1124 09:29:35.258663  326387 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:29:35.258711  326387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:35.277987  326387 system_svc.go:56] duration metric: took 19.296897ms WaitForService to wait for kubelet
	I1124 09:29:35.278195  326387 kubeadm.go:587] duration metric: took 13.0423584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:29:35.278239  326387 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:35.282999  326387 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:35.283084  326387 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:35.283119  326387 node_conditions.go:105] duration metric: took 4.851131ms to run NodePressure ...
	I1124 09:29:35.283157  326387 start.go:242] waiting for startup goroutines ...
	I1124 09:29:35.283168  326387 start.go:247] waiting for cluster config update ...
	I1124 09:29:35.283183  326387 start.go:256] writing updated cluster config ...
	I1124 09:29:35.283487  326387 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:35.288615  326387 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:35.293905  326387 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.302326  326387 pod_ready.go:94] pod "coredns-66bc5c9577-gn9zx" is "Ready"
	I1124 09:29:35.302365  326387 pod_ready.go:86] duration metric: took 8.436531ms for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.305415  326387 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.312327  326387 pod_ready.go:94] pod "etcd-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.312364  326387 pod_ready.go:86] duration metric: took 6.924592ms for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.314862  326387 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.318955  326387 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.318978  326387 pod_ready.go:86] duration metric: took 4.093677ms for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.321394  326387 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.692851  326387 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:35.692882  326387 pod_ready.go:86] duration metric: took 371.463188ms for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:35.897066  326387 pod_ready.go:83] waiting for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.294824  326387 pod_ready.go:94] pod "kube-proxy-2vm2s" is "Ready"
	I1124 09:29:36.294849  326387 pod_ready.go:86] duration metric: took 397.755705ms for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.497305  326387 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.894816  326387 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-164377" is "Ready"
	I1124 09:29:36.894843  326387 pod_ready.go:86] duration metric: took 397.509298ms for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:36.894960  326387 pod_ready.go:40] duration metric: took 1.606200462s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:29:36.961071  326387 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:29:36.967753  326387 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-164377" cluster and "default" namespace by default
	I1124 09:29:34.419549  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0: (2.220878777s)
	I1124 09:29:34.419586  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:29:34.419606  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:34.419660  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:29:34.419673  335638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.220841723s)
	I1124 09:29:34.419694  335638 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:29:34.419716  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:29:36.150703  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.731017207s)
	I1124 09:29:36.150736  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:29:36.150757  335638 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:29:36.150802  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	W1124 09:29:35.731259  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:38.224668  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:37.665170  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:40.164504  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:38.528548  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.377720965s)
	I1124 09:29:38.528580  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:29:38.528605  335638 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:29:38.528676  335638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:29:39.961131  335638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.432426877s)
	I1124 09:29:39.961160  335638 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:29:39.961186  335638 cache_images.go:125] Successfully loaded all cached images
	I1124 09:29:39.961193  335638 cache_images.go:94] duration metric: took 12.500231207s to LoadCachedImages
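
[Editor's annotation] The load phase that just finished is strictly sequential: each transferred tarball is fed to `sudo podman load -i` one at a time, which is why the per-image completions above (roughly 1.1s to 2.4s each) sum to the ~12.5s LoadCachedImages total. A short Go sketch of that loop, with an illustrative image list:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	images := []string{
    		"/var/lib/minikube/images/pause_3.10.1",
    		"/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0",
    		"/var/lib/minikube/images/etcd_3.5.24-0",
    	}
    	for _, img := range images {
    		start := time.Now()
    		// Mirrors the logged command: sudo podman load -i <tarball>
    		if err := exec.Command("sudo", "podman", "load", "-i", img).Run(); err != nil {
    			fmt.Printf("load %s failed: %v\n", img, err)
    			continue
    		}
    		fmt.Printf("loaded %s in %s\n", img, time.Since(start))
    	}
    }
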
	I1124 09:29:39.961203  335638 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:29:39.961290  335638 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-639420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:29:39.961405  335638 ssh_runner.go:195] Run: crio config
	I1124 09:29:40.017556  335638 cni.go:84] Creating CNI manager for ""
	I1124 09:29:40.017574  335638 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:40.017588  335638 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 09:29:40.017611  335638 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639420 NodeName:newest-cni-639420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:29:40.017725  335638 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
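
[Editor's annotation] The generated /var/tmp/minikube/kubeadm.yaml shown above is a single file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small standard-library-only Go sketch of splitting such a file into its documents and reporting each `kind:` — a simplification, not how kubeadm itself parses the file:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated stand-in for the multi-document config above.
    	config := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\n" +
    		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
    	for i, doc := range strings.Split(config, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
    			}
    		}
    	}
    }
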
	I1124 09:29:40.017780  335638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:40.026708  335638 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:29:40.026770  335638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:29:40.035553  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:29:40.035588  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:29:40.035605  335638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:29:40.035555  335638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:29:40.035670  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:29:40.035752  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:29:40.041028  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:29:40.041055  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:29:40.054908  335638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:29:40.054907  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:29:40.054990  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:29:40.071621  335638 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:29:40.071656  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1124 09:29:40.519484  335638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:29:40.527951  335638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1124 09:29:40.541490  335638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:29:40.557833  335638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:29:40.571237  335638 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:29:40.576306  335638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
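
[Editor's annotation] The bash one-liner above is an idempotent /etc/hosts update: it first greps for the exact control-plane entry, then rewrites the file by filtering out any stale `control-plane.minikube.internal` line, appending the fresh mapping, and copying the temp file into place. The same logic in a compact Go sketch, pointed at a scratch file rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any existing line for host, appends the current
    // ip<TAB>host mapping, and writes the result back in one pass.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	fmt.Println(upsertHost("/tmp/hosts.test", "192.168.103.2", "control-plane.minikube.internal"))
    }
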
	I1124 09:29:40.589883  335638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:40.678126  335638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:40.704903  335638 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420 for IP: 192.168.103.2
	I1124 09:29:40.704941  335638 certs.go:195] generating shared ca certs ...
	I1124 09:29:40.704959  335638 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.705124  335638 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:29:40.705180  335638 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:29:40.705192  335638 certs.go:257] generating profile certs ...
	I1124 09:29:40.705287  335638 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key
	I1124 09:29:40.705309  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt with IP's: []
	I1124 09:29:40.775997  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt ...
	I1124 09:29:40.776027  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.crt: {Name:mkcf58d60ab21e3774368023568c4a98b624e7d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.776190  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key ...
	I1124 09:29:40.776201  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key: {Name:mka249965908f6ad2a4645fcec87590859e3d741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.776282  335638 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5
	I1124 09:29:40.776296  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:29:40.890654  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 ...
	I1124 09:29:40.890679  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5: {Name:mk2d89ff9289520c269c4447c1a2481a90ae6b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.890829  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5 ...
	I1124 09:29:40.890844  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5: {Name:mkd105e7354007cf88b1a316f5e37bcbc13961b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.890931  335638 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt.145b87e5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt
	I1124 09:29:40.891002  335638 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key
	I1124 09:29:40.891063  335638 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key
	I1124 09:29:40.891079  335638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt with IP's: []
	I1124 09:29:40.946849  335638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt ...
	I1124 09:29:40.946874  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt: {Name:mk36d387ce57fdba3e54ffc9476c2588e55a96b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:40.947020  335638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key ...
	I1124 09:29:40.947040  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key: {Name:mkff8f18af606cfd446ad97e56c96ba9f13e37da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
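
[Editor's annotation] The apiserver profile cert generated above is issued for the cluster service IP, loopback, an alternate service IP, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). A compact Go sketch of producing a certificate with those IP SANs using the standard library — self-signed here for brevity, whereas minikube signs against its minikubeCA, so this only approximates the shape of the operation:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println(len(der), err)
    }
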
	I1124 09:29:40.947218  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:29:40.947290  335638 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:29:40.947305  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:29:40.947343  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:29:40.947375  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:29:40.947398  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:29:40.947439  335638 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:29:40.948103  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:29:40.967593  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:29:40.985798  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:29:41.005882  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:29:41.024614  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:29:41.042622  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:29:41.060169  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:29:41.077427  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:29:41.095055  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:29:41.115317  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:29:41.133179  335638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:29:41.150670  335638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:29:41.163944  335638 ssh_runner.go:195] Run: openssl version
	I1124 09:29:41.170403  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:29:41.178881  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.182872  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.182924  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:29:41.218574  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:29:41.228668  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:29:41.237539  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.241785  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.241843  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:29:41.279712  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:29:41.289216  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:29:41.298273  335638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.303190  335638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.303272  335638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:29:41.339223  335638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
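
[Editor's annotation] The `openssl x509 -hash` plus `ln -fs` pairs above install each CA cert under the lookup convention OpenSSL uses: certificates in /etc/ssl/certs are found by subject hash, via a `<hash>.0` symlink (hence b5213941.0, 51391683.0, 3ec20f2e.0). A Go sketch of the same step, shelling out to the identical openssl invocation; the target directory is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(pem, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // emulate ln -fs: replace any stale symlink
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"))
    }
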
	I1124 09:29:41.349396  335638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:29:41.353636  335638 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:29:41.353707  335638 kubeadm.go:401] StartCluster: {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:29:41.353787  335638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:29:41.353835  335638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:29:41.385066  335638 cri.go:89] found id: ""
	I1124 09:29:41.385132  335638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:29:41.394546  335638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:29:41.403899  335638 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:29:41.403955  335638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:29:41.412726  335638 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:29:41.412781  335638 kubeadm.go:158] found existing configuration files:
	
	I1124 09:29:41.412846  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:29:41.421489  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:29:41.421553  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:29:41.429748  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:29:41.438109  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:29:41.438163  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:29:41.445795  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:29:41.453942  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:29:41.453999  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:29:41.462783  335638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:29:41.471339  335638 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:29:41.471392  335638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
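
[Editor's annotation] The four grep-then-rm exchanges above are the stale-config sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that is missing or points elsewhere is removed so the upcoming kubeadm init writes a fresh copy. A minimal Go sketch of that sweep, using the same paths as the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, c := range confs {
    		data, err := os.ReadFile(c)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove so it gets regenerated.
    			os.Remove(c)
    			fmt.Println("removed stale config:", c)
    		}
    	}
    }
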
	I1124 09:29:41.479973  335638 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:29:41.593364  335638 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:29:41.656913  335638 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 09:29:40.224803  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:42.724388  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:42.663938  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:45.163258  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:45.223867  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	W1124 09:29:47.223911  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:49.669175  335638 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:29:49.669272  335638 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:29:49.669451  335638 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:29:49.669541  335638 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:29:49.669606  335638 kubeadm.go:319] OS: Linux
	I1124 09:29:49.669691  335638 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:29:49.669773  335638 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:29:49.669873  335638 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:29:49.669934  335638 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:29:49.670009  335638 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:29:49.670097  335638 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:29:49.670172  335638 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:29:49.670239  335638 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:29:49.670393  335638 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:29:49.670546  335638 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:29:49.670668  335638 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:29:49.670729  335638 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:29:49.671906  335638 out.go:252]   - Generating certificates and keys ...
	I1124 09:29:49.672007  335638 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:29:49.672114  335638 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:29:49.672217  335638 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:29:49.672307  335638 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:29:49.672414  335638 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:29:49.672465  335638 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:29:49.672543  335638 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:29:49.672675  335638 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-639420] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:29:49.672731  335638 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:29:49.672823  335638 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-639420] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:29:49.672883  335638 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:29:49.672943  335638 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:29:49.673004  335638 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:29:49.673057  335638 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:29:49.673117  335638 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:29:49.673170  335638 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:29:49.673213  335638 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:29:49.673271  335638 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:29:49.673361  335638 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:29:49.673433  335638 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:29:49.673488  335638 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:29:49.674747  335638 out.go:252]   - Booting up control plane ...
	I1124 09:29:49.674820  335638 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:29:49.674896  335638 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:29:49.674982  335638 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:29:49.675127  335638 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:29:49.675248  335638 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:29:49.675417  335638 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:29:49.675530  335638 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:29:49.675584  335638 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:29:49.675760  335638 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:29:49.675908  335638 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:29:49.676012  335638 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.803251ms
	I1124 09:29:49.676144  335638 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:29:49.676260  335638 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 09:29:49.676372  335638 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:29:49.676457  335638 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:29:49.676528  335638 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 509.98897ms
	I1124 09:29:49.676590  335638 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.697436813s
	I1124 09:29:49.676653  335638 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501787232s
	I1124 09:29:49.676756  335638 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:29:49.676892  335638 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:29:49.676964  335638 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:29:49.677164  335638 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-639420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:29:49.677248  335638 kubeadm.go:319] [bootstrap-token] Using token: a6gobe.avdvdblcqznwc247
	I1124 09:29:49.678527  335638 out.go:252]   - Configuring RBAC rules ...
	I1124 09:29:49.678633  335638 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:29:49.678728  335638 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:29:49.678888  335638 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:29:49.679032  335638 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:29:49.679186  335638 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:29:49.679273  335638 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:29:49.679412  335638 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:29:49.679486  335638 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:29:49.679559  335638 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:29:49.679568  335638 kubeadm.go:319] 
	I1124 09:29:49.679656  335638 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:29:49.679666  335638 kubeadm.go:319] 
	I1124 09:29:49.679764  335638 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:29:49.679786  335638 kubeadm.go:319] 
	I1124 09:29:49.679826  335638 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:29:49.679900  335638 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:29:49.679946  335638 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:29:49.679955  335638 kubeadm.go:319] 
	I1124 09:29:49.680007  335638 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:29:49.680013  335638 kubeadm.go:319] 
	I1124 09:29:49.680073  335638 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:29:49.680082  335638 kubeadm.go:319] 
	I1124 09:29:49.680146  335638 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:29:49.680225  335638 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:29:49.680297  335638 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:29:49.680303  335638 kubeadm.go:319] 
	I1124 09:29:49.680431  335638 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:29:49.680537  335638 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:29:49.680544  335638 kubeadm.go:319] 
	I1124 09:29:49.680634  335638 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a6gobe.avdvdblcqznwc247 \
	I1124 09:29:49.680784  335638 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:29:49.680808  335638 kubeadm.go:319] 	--control-plane 
	I1124 09:29:49.680819  335638 kubeadm.go:319] 
	I1124 09:29:49.680956  335638 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:29:49.680963  335638 kubeadm.go:319] 
	I1124 09:29:49.681063  335638 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a6gobe.avdvdblcqznwc247 \
	I1124 09:29:49.681231  335638 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
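
Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that recomputes it, assuming the kubeadm-default CA path /etc/kubernetes/pki/ca.crt on the control plane (not minikube's own code, just the same calculation):

// cahash.go: recompute kubeadm's discovery-token-ca-cert-hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed kubeadm-default location of the cluster CA certificate.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
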
	I1124 09:29:49.681253  335638 cni.go:84] Creating CNI manager for ""
	I1124 09:29:49.681263  335638 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:29:49.683273  335638 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 09:29:47.163708  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:49.662699  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	W1124 09:29:51.663766  333403 pod_ready.go:104] pod "coredns-7d764666f9-ll2c4" is not "Ready", error: <nil>
	I1124 09:29:49.684325  335638 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:29:49.688867  335638 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:29:49.688885  335638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:29:49.702229  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:29:49.905666  335638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:29:49.905727  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:49.905767  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-639420 minikube.k8s.io/updated_at=2025_11_24T09_29_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=newest-cni-639420 minikube.k8s.io/primary=true
	I1124 09:29:49.989760  335638 ops.go:34] apiserver oom_adj: -16
	I1124 09:29:49.989887  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:50.490663  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:50.990897  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:51.490580  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:51.990406  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 09:29:49.725142  330481 pod_ready.go:104] pod "coredns-5dd5756b68-gmgwv" is not "Ready", error: <nil>
	I1124 09:29:51.723677  330481 pod_ready.go:94] pod "coredns-5dd5756b68-gmgwv" is "Ready"
	I1124 09:29:51.723701  330481 pod_ready.go:86] duration metric: took 30.505621947s for pod "coredns-5dd5756b68-gmgwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.727754  330481 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.731482  330481 pod_ready.go:94] pod "etcd-old-k8s-version-767267" is "Ready"
	I1124 09:29:51.731501  330481 pod_ready.go:86] duration metric: took 3.729799ms for pod "etcd-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.733774  330481 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.737401  330481 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-767267" is "Ready"
	I1124 09:29:51.737417  330481 pod_ready.go:86] duration metric: took 3.62386ms for pod "kube-apiserver-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.739625  330481 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:51.922076  330481 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-767267" is "Ready"
	I1124 09:29:51.922106  330481 pod_ready.go:86] duration metric: took 182.464372ms for pod "kube-controller-manager-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:52.123065  330481 pod_ready.go:83] waiting for pod "kube-proxy-b8kgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:52.521474  330481 pod_ready.go:94] pod "kube-proxy-b8kgc" is "Ready"
	I1124 09:29:52.521507  330481 pod_ready.go:86] duration metric: took 398.418913ms for pod "kube-proxy-b8kgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:52.722564  330481 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:53.122143  330481 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-767267" is "Ready"
	I1124 09:29:53.122167  330481 pod_ready.go:86] duration metric: took 399.576003ms for pod "kube-scheduler-old-k8s-version-767267" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:29:53.122178  330481 pod_ready.go:40] duration metric: took 31.909901894s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
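
Note: the pod_ready waits above poll kube-system pods for the Ready condition. A minimal client-go sketch of the same check, assuming a kubeconfig at the default ~/.kube/config location (a simplification of what pod_ready.go does, not its actual code):

// podready.go: report the Ready condition of every kube-system pod.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed default kubeconfig path.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			// A pod is "Ready" when its PodReady condition is True.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}
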
	I1124 09:29:53.177059  330481 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:29:53.178421  330481 out.go:203] 
	W1124 09:29:53.179500  330481 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:29:53.180730  330481 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:29:53.181977  330481 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-767267" cluster and "default" namespace by default
	I1124 09:29:52.490624  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:52.990812  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:53.490603  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:53.990395  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:54.490546  335638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:29:54.566713  335638 kubeadm.go:1114] duration metric: took 4.661046175s to wait for elevateKubeSystemPrivileges
	I1124 09:29:54.566752  335638 kubeadm.go:403] duration metric: took 13.213050956s to StartCluster
	I1124 09:29:54.566773  335638 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:54.566845  335638 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:29:54.568284  335638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:29:54.568540  335638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:29:54.568574  335638 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:29:54.568630  335638 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:29:54.568728  335638 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-639420"
	I1124 09:29:54.568746  335638 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-639420"
	I1124 09:29:54.568756  335638 addons.go:70] Setting default-storageclass=true in profile "newest-cni-639420"
	I1124 09:29:54.568775  335638 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639420"
	I1124 09:29:54.568777  335638 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:29:54.568846  335638 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:29:54.569154  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:54.569327  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:54.570001  335638 out.go:179] * Verifying Kubernetes components...
	I1124 09:29:54.571302  335638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:29:54.593535  335638 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:29:54.594295  335638 addons.go:239] Setting addon default-storageclass=true in "newest-cni-639420"
	I1124 09:29:54.594343  335638 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:29:54.594646  335638 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:29:54.594747  335638 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:54.594762  335638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:29:54.594800  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:54.626250  335638 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:54.626282  335638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:29:54.626407  335638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:29:54.634954  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:54.652300  335638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:29:54.668998  335638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:29:54.710031  335638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:29:54.751913  335638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:29:54.772142  335638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:29:54.845662  335638 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 09:29:54.847435  335638 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:29:54.847493  335638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:29:55.088871  335638 api_server.go:72] duration metric: took 520.258044ms to wait for apiserver process to appear ...
	I1124 09:29:55.088899  335638 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:29:55.088920  335638 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:29:55.095149  335638 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:29:55.096368  335638 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:29:55.096393  335638 api_server.go:131] duration metric: took 7.486307ms to wait for apiserver health ...
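
Note: the healthz wait above is plain HTTPS polling against the apiserver. A minimal Go sketch of that loop, using the endpoint from this run; InsecureSkipVerify here is a stand-in for the CA-pinned client the real check uses:

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: skip cert verification instead of pinning the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}
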
	I1124 09:29:55.096403  335638 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:29:55.101044  335638 system_pods.go:59] 8 kube-system pods found
	I1124 09:29:55.101078  335638 system_pods.go:61] "coredns-7d764666f9-nt7fv" [00c5787a-4637-43cd-9afb-89764455d459] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:29:55.101086  335638 system_pods.go:61] "etcd-newest-cni-639420" [88147c4d-9cf5-44d6-a652-2049ed50b037] Running
	I1124 09:29:55.101096  335638 system_pods.go:61] "kindnet-ttw2l" [bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:29:55.101108  335638 system_pods.go:61] "kube-apiserver-newest-cni-639420" [3df71d2c-f364-4355-b195-23505ac0361d] Running
	I1124 09:29:55.101119  335638 system_pods.go:61] "kube-controller-manager-newest-cni-639420" [6ba83968-a1d9-4eb7-9229-9f343f480f0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:29:55.101125  335638 system_pods.go:61] "kube-proxy-p6g59" [732ff47b-0bb4-48c6-bd56-743340884576] Running
	I1124 09:29:55.101133  335638 system_pods.go:61] "kube-scheduler-newest-cni-639420" [1ce9861f-a76c-4666-b111-78cc89effb8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:29:55.101141  335638 system_pods.go:61] "storage-provisioner" [5fca1a17-f28e-4ec3-8d5a-563796fb9109] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:29:55.101147  335638 system_pods.go:74] duration metric: took 4.738885ms to wait for pod list to return data ...
	I1124 09:29:55.101156  335638 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:29:55.101315  335638 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:29:55.103120  335638 addons.go:530] duration metric: took 534.490918ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:29:55.103973  335638 default_sa.go:45] found service account: "default"
	I1124 09:29:55.103993  335638 default_sa.go:55] duration metric: took 2.829882ms for default service account to be created ...
	I1124 09:29:55.104006  335638 kubeadm.go:587] duration metric: took 535.396699ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:29:55.104026  335638 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:29:55.106569  335638 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:29:55.106593  335638 node_conditions.go:123] node cpu capacity is 8
	I1124 09:29:55.106608  335638 node_conditions.go:105] duration metric: took 2.576914ms to run NodePressure ...
	I1124 09:29:55.106620  335638 start.go:242] waiting for startup goroutines ...
	I1124 09:29:55.351418  335638 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-639420" context rescaled to 1 replicas
	I1124 09:29:55.351462  335638 start.go:247] waiting for cluster config update ...
	I1124 09:29:55.351477  335638 start.go:256] writing updated cluster config ...
	I1124 09:29:55.351763  335638 ssh_runner.go:195] Run: rm -f paused
	I1124 09:29:55.423386  335638 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:29:55.424983  335638 out.go:179] * Done! kubectl is now configured to use "newest-cni-639420" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.760500555Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1d3929c8-8b7b-4f15-ae19-a0717e5f5ff0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.761160791Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.762146625Z" level=info msg="Ran pod sandbox e403fa4543aed82435d681fab51d95a13749c78b04e970df5a70916bfc9b9e23 with infra container: kube-system/kindnet-ttw2l/POD" id=270d7d89-7d2c-4f76-89b0-b0d319f49502 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.763799632Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=054ddf75-47f9-49f0-9936-6aee965ad0b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.763937357Z" level=info msg="Image docker.io/kindest/kindnetd:v20250512-df8de77b not found" id=054ddf75-47f9-49f0-9936-6aee965ad0b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.763985113Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20250512-df8de77b found" id=054ddf75-47f9-49f0-9936-6aee965ad0b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.766320982Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2fae282-4eab-4d28-bb8a-deea9996e445 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.766983557Z" level=info msg="Creating container: kube-system/kube-proxy-p6g59/kube-proxy" id=0f952d3a-100e-4a34-82d5-cc5933a27f92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.767118348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.768067999Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250512-df8de77b\""
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.771907922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.77263399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.810306799Z" level=info msg="Created container 7e6184e64326e8a20c1452ae13e6b9385577b992556d0a84b5e42f09a001a45d: kube-system/kube-proxy-p6g59/kube-proxy" id=0f952d3a-100e-4a34-82d5-cc5933a27f92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.811098006Z" level=info msg="Starting container: 7e6184e64326e8a20c1452ae13e6b9385577b992556d0a84b5e42f09a001a45d" id=54e9b2cf-d3ae-4f91-a790-334a6ab16237 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:54 newest-cni-639420 crio[769]: time="2025-11-24T09:29:54.814112807Z" level=info msg="Started container" PID=2835 containerID=7e6184e64326e8a20c1452ae13e6b9385577b992556d0a84b5e42f09a001a45d description=kube-system/kube-proxy-p6g59/kube-proxy id=54e9b2cf-d3ae-4f91-a790-334a6ab16237 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f57c15ad9adaccc6ffb31a3f6498d27515bd258719981bbbf9b6d2224864476
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.101010517Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11" id=f2fae282-4eab-4d28-bb8a-deea9996e445 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.101746326Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1d723d37-8830-4a3a-bb35-0a57b393ff90 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.103855615Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=138528dc-e76e-4e10-aa4e-7b256a3dfddb name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.107912356Z" level=info msg="Creating container: kube-system/kindnet-ttw2l/kindnet-cni" id=8ae87808-8c9c-4ae3-83f9-a1b8c4b71050 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.108025729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.113057368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.11365955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.136957538Z" level=info msg="Created container b6c8a41575cd98d4e5fa4690c1794118f4cc16cca7139f9fd4004f376859b9d8: kube-system/kindnet-ttw2l/kindnet-cni" id=8ae87808-8c9c-4ae3-83f9-a1b8c4b71050 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.137688758Z" level=info msg="Starting container: b6c8a41575cd98d4e5fa4690c1794118f4cc16cca7139f9fd4004f376859b9d8" id=8959af36-573e-4e0b-acc7-2e3ee9e8071e name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:56 newest-cni-639420 crio[769]: time="2025-11-24T09:29:56.139641845Z" level=info msg="Started container" PID=3106 containerID=b6c8a41575cd98d4e5fa4690c1794118f4cc16cca7139f9fd4004f376859b9d8 description=kube-system/kindnet-ttw2l/kindnet-cni id=8959af36-573e-4e0b-acc7-2e3ee9e8071e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e403fa4543aed82435d681fab51d95a13749c78b04e970df5a70916bfc9b9e23
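
Note: the "Checking image status" lines above are CRI ImageStatus RPCs from the kubelet to CRI-O. A minimal Go sketch of the same query over CRI-O's default socket (path assumed; adjust for your runtime):

// imagestatus.go: ask CRI-O via the CRI API whether an image is present.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O default socket path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/kindest/kindnetd:v20250512-df8de77b"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// A nil Image in the response means the runtime does not have the image.
	if resp.Image == nil {
		fmt.Println("image not present")
		return
	}
	fmt.Printf("image present: id=%s size=%d\n", resp.Image.Id, resp.Image.Size_)
}
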
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b6c8a41575cd9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11   Less than a second ago   Running             kindnet-cni               0                   e403fa4543aed       kindnet-ttw2l                               kube-system
	7e6184e64326e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                     1 second ago             Running             kube-proxy                0                   2f57c15ad9ada       kube-proxy-p6g59                            kube-system
	178c6c0695e5b       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                     11 seconds ago           Running             kube-apiserver            0                   1f44d00a01c53       kube-apiserver-newest-cni-639420            kube-system
	212a12394ecb7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                     11 seconds ago           Running             kube-controller-manager   0                   8b8b7b7a7c32a       kube-controller-manager-newest-cni-639420   kube-system
	d18bded96dd0c       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                     11 seconds ago           Running             kube-scheduler            0                   657dc57a0a879       kube-scheduler-newest-cni-639420            kube-system
	fbf2bd94fcb0a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     11 seconds ago           Running             etcd                      0                   e7d26b7c16663       etcd-newest-cni-639420                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-639420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-639420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=newest-cni-639420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-639420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:29:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:49 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:49 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:49 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 09:29:49 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-639420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c98ff8f9-f47f-426e-a902-762092513ece
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-639420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-ttw2l                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-639420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-639420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-p6g59                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-639420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-639420 event: Registered Node newest-cni-639420 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [fbf2bd94fcb0a92d7925854c42ca543eb46cdbd79bc105fdb7f73e949657589d] <==
	{"level":"warn","ts":"2025-11-24T09:29:45.747969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.756906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.763918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.771170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.781934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.786652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.796077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.804824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.812745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.825582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.832682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.839888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.847812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.855185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.862620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.869545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.876140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.882507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.890310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.896675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.915041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.921797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.928216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.934629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:45.983625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41480","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:29:56 up  1:12,  0 user,  load average: 4.06, 3.36, 2.20
	Linux newest-cni-639420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b6c8a41575cd98d4e5fa4690c1794118f4cc16cca7139f9fd4004f376859b9d8] <==
	I1124 09:29:56.327521       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:56.327825       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 09:29:56.327979       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:56.327998       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:56.328025       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:56.531855       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:56.531956       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:56.531975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:56.532177       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:56.932477       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:56.932507       1 metrics.go:72] Registering metrics
	I1124 09:29:56.932582       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [178c6c0695e5bba49a3c1cead2c2ae5283019e5c40c2661dc20ceb61ccddab8a] <==
	I1124 09:29:46.503840       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:46.503856       1 policy_source.go:248] refreshing policies
	E1124 09:29:46.536837       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1124 09:29:46.585146       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:29:46.604347       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1124 09:29:46.604375       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:46.610399       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:46.693595       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:47.387578       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1124 09:29:47.391770       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:29:47.391791       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:29:47.875373       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:47.911669       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:47.992494       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:29:47.998518       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 09:29:47.999648       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:29:48.003725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:48.422538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:29:49.070397       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:29:49.079669       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:29:49.086580       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:29:54.274916       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:29:54.376979       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:54.380787       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:54.424384       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [212a12394ecb7da6aff14f77ed59a3df67f41256527e26836c90046286305c62] <==
	I1124 09:29:53.228898       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.228947       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.229077       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.229105       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.228596       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.228463       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.229174       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.228884       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.228568       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.229245       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 09:29:53.228781       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.229301       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-639420"
	I1124 09:29:53.229459       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:29:53.230074       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.230102       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.230106       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.230184       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.230364       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.234025       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.236274       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:53.236825       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-639420" podCIDRs=["10.42.0.0/24"]
	I1124 09:29:53.329227       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:53.329265       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:29:53.329272       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:29:53.337452       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7e6184e64326e8a20c1452ae13e6b9385577b992556d0a84b5e42f09a001a45d] <==
	I1124 09:29:54.866400       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:29:54.941843       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:55.043658       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:55.043705       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 09:29:55.043834       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:29:55.073406       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:55.073497       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:29:55.081634       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:29:55.084434       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:29:55.084698       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:55.087133       1 config.go:200] "Starting service config controller"
	I1124 09:29:55.087254       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:29:55.087229       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:29:55.087327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:29:55.087902       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:29:55.089111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:29:55.088989       1 config.go:309] "Starting node config controller"
	I1124 09:29:55.089197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:29:55.089222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:29:55.188231       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:29:55.188364       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:29:55.189283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d18bded96dd0c0b713cd3632203ee6f4e4d5e68054fb13b0c0f668a2e162310e] <==
	E1124 09:29:46.453409       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 09:29:46.453578       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 09:29:46.453726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 09:29:46.453421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 09:29:46.453593       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 09:29:46.453542       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 09:29:46.453693       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1124 09:29:46.453569       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 09:29:46.454174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 09:29:46.454284       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 09:29:47.389863       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1124 09:29:47.390981       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 09:29:47.451148       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:29:47.452314       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 09:29:47.518623       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1124 09:29:47.519543       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 09:29:47.531795       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1124 09:29:47.532691       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1124 09:29:47.548291       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1124 09:29:47.549367       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 09:29:47.666888       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:29:47.667866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 09:29:47.737614       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1124 09:29:47.738769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1124 09:29:50.347983       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: E1124 09:29:49.922713    2541 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-639420\" already exists" pod="kube-system/kube-apiserver-newest-cni-639420"
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: E1124 09:29:49.922794    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: I1124 09:29:49.948941    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-639420" podStartSLOduration=2.948926037 podStartE2EDuration="2.948926037s" podCreationTimestamp="2025-11-24 09:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:49.939409812 +0000 UTC m=+1.121151912" watchObservedRunningTime="2025-11-24 09:29:49.948926037 +0000 UTC m=+1.130668137"
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: I1124 09:29:49.949087    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-639420" podStartSLOduration=0.949079508 podStartE2EDuration="949.079508ms" podCreationTimestamp="2025-11-24 09:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:49.949021357 +0000 UTC m=+1.130763457" watchObservedRunningTime="2025-11-24 09:29:49.949079508 +0000 UTC m=+1.130821605"
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: I1124 09:29:49.959154    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-639420" podStartSLOduration=0.959139949 podStartE2EDuration="959.139949ms" podCreationTimestamp="2025-11-24 09:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:49.959073217 +0000 UTC m=+1.140815318" watchObservedRunningTime="2025-11-24 09:29:49.959139949 +0000 UTC m=+1.140882051"
	Nov 24 09:29:49 newest-cni-639420 kubelet[2541]: I1124 09:29:49.970567    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-639420" podStartSLOduration=0.970553161 podStartE2EDuration="970.553161ms" podCreationTimestamp="2025-11-24 09:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:49.970039639 +0000 UTC m=+1.151781740" watchObservedRunningTime="2025-11-24 09:29:49.970553161 +0000 UTC m=+1.152295261"
	Nov 24 09:29:50 newest-cni-639420 kubelet[2541]: E1124 09:29:50.916447    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:29:50 newest-cni-639420 kubelet[2541]: E1124 09:29:50.916620    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-639420" containerName="etcd"
	Nov 24 09:29:50 newest-cni-639420 kubelet[2541]: E1124 09:29:50.916730    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:29:51 newest-cni-639420 kubelet[2541]: E1124 09:29:51.918305    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:29:51 newest-cni-639420 kubelet[2541]: E1124 09:29:51.918471    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:29:53 newest-cni-639420 kubelet[2541]: I1124 09:29:53.311210    2541 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 09:29:53 newest-cni-639420 kubelet[2541]: I1124 09:29:53.311870    2541 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520321    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/732ff47b-0bb4-48c6-bd56-743340884576-kube-proxy\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520395    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-xtables-lock\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520428    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85-xtables-lock\") pod \"kindnet-ttw2l\" (UID: \"bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85\") " pod="kube-system/kindnet-ttw2l"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520450    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4vs\" (UniqueName: \"kubernetes.io/projected/bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85-kube-api-access-wm4vs\") pod \"kindnet-ttw2l\" (UID: \"bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85\") " pod="kube-system/kindnet-ttw2l"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520486    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-lib-modules\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520506    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrhm\" (UniqueName: \"kubernetes.io/projected/732ff47b-0bb4-48c6-bd56-743340884576-kube-api-access-jsrhm\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520683    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85-cni-cfg\") pod \"kindnet-ttw2l\" (UID: \"bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85\") " pod="kube-system/kindnet-ttw2l"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: I1124 09:29:54.520747    2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85-lib-modules\") pod \"kindnet-ttw2l\" (UID: \"bcb40a0d-89d2-4c4a-b089-1ffe6ac2ce85\") " pod="kube-system/kindnet-ttw2l"
	Nov 24 09:29:54 newest-cni-639420 kubelet[2541]: E1124 09:29:54.996648    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-639420" containerName="kube-controller-manager"
	Nov 24 09:29:55 newest-cni-639420 kubelet[2541]: I1124 09:29:55.010906    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-p6g59" podStartSLOduration=1.010883301 podStartE2EDuration="1.010883301s" podCreationTimestamp="2025-11-24 09:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:29:54.940536018 +0000 UTC m=+6.122278118" watchObservedRunningTime="2025-11-24 09:29:55.010883301 +0000 UTC m=+6.192625404"
	Nov 24 09:29:55 newest-cni-639420 kubelet[2541]: E1124 09:29:55.143878    2541 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:29:56 newest-cni-639420 kubelet[2541]: I1124 09:29:56.944937    2541 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-ttw2l" podStartSLOduration=1.6077711049999999 podStartE2EDuration="2.944915703s" podCreationTimestamp="2025-11-24 09:29:54 +0000 UTC" firstStartedPulling="2025-11-24 09:29:54.765768925 +0000 UTC m=+5.947511017" lastFinishedPulling="2025-11-24 09:29:56.102913536 +0000 UTC m=+7.284655615" observedRunningTime="2025-11-24 09:29:56.944645094 +0000 UTC m=+8.126387208" watchObservedRunningTime="2025-11-24 09:29:56.944915703 +0000 UTC m=+8.126657804"
	

-- /stdout --
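Editor's note: the "Failed to watch ... is forbidden" reflector errors above stop once the scheduler's caches sync at 09:29:50.347983, which suggests a startup race with the restarted apiserver's RBAC machinery rather than a missing ClusterRole. A minimal sketch (assuming an admin kubeconfig at the default path and client-go on the module path, neither stated in the report) of replaying the first failed permission check with a SubjectAccessReview:

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Admin credentials from the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Ask the apiserver the question implied by the first reflector error:
        // may system:kube-scheduler list persistentvolumes at cluster scope?
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "persistentvolumes",
                },
            },
        }
        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
            context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }

Run once the cluster is settled, this should report allowed=true; during the window covered by the log above it would have reported false.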
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639420 -n newest-cni-639420
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-639420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-nt7fv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner: exit status 1 (55.794919ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nt7fv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
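Editor's note: the NotFound errors above are the usual post-mortem race: the pods flagged as non-running at helpers_test.go:280 (coredns-7d764666f9-nt7fv, storage-provisioner) were transient and already replaced by the time `kubectl describe` ran. A minimal client-go sketch of the same non-running-pods query the harness issues, assuming the default kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Equivalent to:
        //   kubectl get po -A --field-selector=status.phase!=Running
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Namespace, p.Name, p.Status.Phase)
        }
    }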

TestStartStop/group/old-k8s-version/serial/Pause (6.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-767267 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-767267 --alsologtostderr -v=1: exit status 80 (1.833776335s)

-- stdout --
	* Pausing node old-k8s-version-767267 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 09:30:04.921623  345652 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:04.921877  345652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:04.921886  345652 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:04.921891  345652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:04.922130  345652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:04.922426  345652 out.go:368] Setting JSON to false
	I1124 09:30:04.922444  345652 mustload.go:66] Loading cluster: old-k8s-version-767267
	I1124 09:30:04.922831  345652 config.go:182] Loaded profile config "old-k8s-version-767267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:30:04.923248  345652 cli_runner.go:164] Run: docker container inspect old-k8s-version-767267 --format={{.State.Status}}
	I1124 09:30:04.943531  345652 host.go:66] Checking if "old-k8s-version-767267" exists ...
	I1124 09:30:04.943916  345652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:05.005545  345652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-11-24 09:30:04.994057124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:05.006191  345652 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-767267 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:30:05.008309  345652 out.go:179] * Pausing node old-k8s-version-767267 ... 
	I1124 09:30:05.009452  345652 host.go:66] Checking if "old-k8s-version-767267" exists ...
	I1124 09:30:05.009678  345652 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:05.009710  345652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-767267
	I1124 09:30:05.028169  345652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/old-k8s-version-767267/id_rsa Username:docker}
	I1124 09:30:05.128992  345652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:05.142146  345652 pause.go:52] kubelet running: true
	I1124 09:30:05.142212  345652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:05.317324  345652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:05.317440  345652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:05.383096  345652 cri.go:89] found id: "66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017"
	I1124 09:30:05.383122  345652 cri.go:89] found id: "6b40170103002c57e488dd5ac0b91b8f6f8b44bfab54d31beec69f718e520ca1"
	I1124 09:30:05.383128  345652 cri.go:89] found id: "110b0f1e92e3b1d9592bcc10ac6ba1b1ffa82c44f520bca311539c7415f55584"
	I1124 09:30:05.383134  345652 cri.go:89] found id: "c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771"
	I1124 09:30:05.383138  345652 cri.go:89] found id: "5279dc42578ddbdd88a86d520878aa8c2388ac69fa692cff0d5c39910b815079"
	I1124 09:30:05.383143  345652 cri.go:89] found id: "d7f9989bef9cdec8bcf2a9dc31466db9a4b0ee30c0360721775fc7e2491ff2b2"
	I1124 09:30:05.383148  345652 cri.go:89] found id: "957db9e46cf07c1058ffd4395c982714d8f71f43483a3d024a3aed61ac25b6da"
	I1124 09:30:05.383152  345652 cri.go:89] found id: "d8a882f20879e8f43298374296d4ea577c4a71dd2a327551055374134f9728dc"
	I1124 09:30:05.383156  345652 cri.go:89] found id: "f2432e65f01bff8ce99f46c9bef8a0c8d04e2a92a461bc81ed22a74e42f65cb1"
	I1124 09:30:05.383169  345652 cri.go:89] found id: "d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	I1124 09:30:05.383178  345652 cri.go:89] found id: "c1607c95af442b8f122f80ce26959c6a88568b9ef8983223eaf0f1c71d0f3da6"
	I1124 09:30:05.383182  345652 cri.go:89] found id: ""
	I1124 09:30:05.383227  345652 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:05.396096  345652 retry.go:31] will retry after 196.926752ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:05.593532  345652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:05.606574  345652 pause.go:52] kubelet running: false
	I1124 09:30:05.606634  345652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:05.754565  345652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:05.754659  345652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:05.832696  345652 cri.go:89] found id: "66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017"
	I1124 09:30:05.832718  345652 cri.go:89] found id: "6b40170103002c57e488dd5ac0b91b8f6f8b44bfab54d31beec69f718e520ca1"
	I1124 09:30:05.832723  345652 cri.go:89] found id: "110b0f1e92e3b1d9592bcc10ac6ba1b1ffa82c44f520bca311539c7415f55584"
	I1124 09:30:05.832728  345652 cri.go:89] found id: "c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771"
	I1124 09:30:05.832732  345652 cri.go:89] found id: "5279dc42578ddbdd88a86d520878aa8c2388ac69fa692cff0d5c39910b815079"
	I1124 09:30:05.832738  345652 cri.go:89] found id: "d7f9989bef9cdec8bcf2a9dc31466db9a4b0ee30c0360721775fc7e2491ff2b2"
	I1124 09:30:05.832742  345652 cri.go:89] found id: "957db9e46cf07c1058ffd4395c982714d8f71f43483a3d024a3aed61ac25b6da"
	I1124 09:30:05.832747  345652 cri.go:89] found id: "d8a882f20879e8f43298374296d4ea577c4a71dd2a327551055374134f9728dc"
	I1124 09:30:05.832751  345652 cri.go:89] found id: "f2432e65f01bff8ce99f46c9bef8a0c8d04e2a92a461bc81ed22a74e42f65cb1"
	I1124 09:30:05.832765  345652 cri.go:89] found id: "d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	I1124 09:30:05.832773  345652 cri.go:89] found id: "c1607c95af442b8f122f80ce26959c6a88568b9ef8983223eaf0f1c71d0f3da6"
	I1124 09:30:05.832779  345652 cri.go:89] found id: ""
	I1124 09:30:05.832824  345652 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:05.845073  345652 retry.go:31] will retry after 546.375822ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:06.391625  345652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:06.405119  345652 pause.go:52] kubelet running: false
	I1124 09:30:06.405167  345652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:06.579895  345652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:06.579966  345652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:06.667065  345652 cri.go:89] found id: "66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017"
	I1124 09:30:06.667092  345652 cri.go:89] found id: "6b40170103002c57e488dd5ac0b91b8f6f8b44bfab54d31beec69f718e520ca1"
	I1124 09:30:06.667098  345652 cri.go:89] found id: "110b0f1e92e3b1d9592bcc10ac6ba1b1ffa82c44f520bca311539c7415f55584"
	I1124 09:30:06.667104  345652 cri.go:89] found id: "c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771"
	I1124 09:30:06.667109  345652 cri.go:89] found id: "5279dc42578ddbdd88a86d520878aa8c2388ac69fa692cff0d5c39910b815079"
	I1124 09:30:06.667114  345652 cri.go:89] found id: "d7f9989bef9cdec8bcf2a9dc31466db9a4b0ee30c0360721775fc7e2491ff2b2"
	I1124 09:30:06.667119  345652 cri.go:89] found id: "957db9e46cf07c1058ffd4395c982714d8f71f43483a3d024a3aed61ac25b6da"
	I1124 09:30:06.667123  345652 cri.go:89] found id: "d8a882f20879e8f43298374296d4ea577c4a71dd2a327551055374134f9728dc"
	I1124 09:30:06.667128  345652 cri.go:89] found id: "f2432e65f01bff8ce99f46c9bef8a0c8d04e2a92a461bc81ed22a74e42f65cb1"
	I1124 09:30:06.667144  345652 cri.go:89] found id: "d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	I1124 09:30:06.667152  345652 cri.go:89] found id: "c1607c95af442b8f122f80ce26959c6a88568b9ef8983223eaf0f1c71d0f3da6"
	I1124 09:30:06.667157  345652 cri.go:89] found id: ""
	I1124 09:30:06.667213  345652 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:06.684251  345652 out.go:203] 
	W1124 09:30:06.685503  345652 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:30:06.685522  345652 out.go:285] * 
	* 
	W1124 09:30:06.689669  345652 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:30:06.690873  345652 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-767267 --alsologtostderr -v=1 failed: exit status 80
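Editor's note: the failure mode is identical across all three attempts in the trace above: kubelet stops cleanly, crictl still lists the kube-system containers, but `sudo runc list -f json` cannot open /run/runc, so pause.go exhausts its retries and exits with GUEST_PAUSE. A rough sketch of the retry shape visible in the trace (the ~196ms and ~546ms jittered waits); the helper name and constants here are illustrative, not minikube's actual retry package API:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // listRunningContainers stands in for the failing step in the trace:
    // `sudo runc list -f json` on the node (illustrative; runs locally here
    // and needs root plus runc on PATH).
    func listRunningContainers() error {
        return exec.Command("sudo", "runc", "list", "-f", "json").Run()
    }

    func main() {
        backoff := 200 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            err := listRunningContainers()
            if err == nil {
                fmt.Println("containers listed, proceeding to pause")
                return
            }
            if attempt == 3 {
                // Matches the observed behavior: the third failure aborts
                // with GUEST_PAUSE instead of waiting again.
                fmt.Println("Exiting due to GUEST_PAUSE:", err)
                return
            }
            // Jittered exponential backoff, in the spirit of the logged
            // "will retry after 196.926752ms" / "546.375822ms" waits.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v\n", wait)
            time.Sleep(wait)
            backoff *= 2
        }
    }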
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-767267
helpers_test.go:243: (dbg) docker inspect old-k8s-version-767267:

-- stdout --
	[
	    {
	        "Id": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	        "Created": "2025-11-24T09:27:59.477215384Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:29:09.801452132Z",
	            "FinishedAt": "2025-11-24T09:29:08.869079826Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hosts",
	        "LogPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558-json.log",
	        "Name": "/old-k8s-version-767267",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-767267:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-767267",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	                "LowerDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-767267",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-767267/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-767267",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b952731a880517011162eca6cbab45544899b359c1ed5711fd0c21a59f3d9a1",
	            "SandboxKey": "/var/run/docker/netns/8b952731a880",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-767267": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49a891848d14199803dd04f544287d94ca351d74be411134145450566451080b",
	                    "EndpointID": "cbc015a94f8eea4db6cf39cb2a2bf7bd2ec35605296cf437d6f7ded17b3f666a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:58:55:85:9a:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-767267",
	                        "b2fbca5819e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
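Editor's note: one detail worth pulling out of this inspect dump: the harness never hardcodes ports. It resolves the SSH endpoint (127.0.0.1:33108 above) from NetworkSettings.Ports using the Go template logged by cli_runner.go at 09:30:05.009710. A standalone sketch of the same lookup; the only assumptions are docker on PATH and the container name from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template cli_runner.go logs above, minus the extra single
        // quotes minikube adds for shell transport.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "old-k8s-version-767267").Output()
        if err != nil {
            panic(err)
        }
        // Given the NetworkSettings block above, this prints "33108".
        fmt.Println(strings.TrimSpace(string(out)))
    }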
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267: exit status 2 (374.313601ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25: (1.177137867s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo crio config                                                                                                                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ delete  │ -p bridge-949664                                                                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ stop    │ -p old-k8s-version-767267 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ stop    │ -p no-preload-938348 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:06.455406  346330 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:06.455795  346330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:06.455815  346330 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:06.455822  346330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:06.456125  346330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:06.456847  346330 out.go:368] Setting JSON to false
	I1124 09:30:06.458780  346330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4352,"bootTime":1763972254,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:06.458858  346330 start.go:143] virtualization: kvm guest
	I1124 09:30:06.460773  346330 out.go:179] * [default-k8s-diff-port-164377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:06.461935  346330 notify.go:221] Checking for updates...
	I1124 09:30:06.461943  346330 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:06.463463  346330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:06.464783  346330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:06.466213  346330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:06.467544  346330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:06.468816  346330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:06.470709  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:06.471583  346330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:06.501679  346330 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:06.501765  346330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:06.565053  346330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:06.554042895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:06.565207  346330 docker.go:319] overlay module found
	I1124 09:30:06.574530  346330 out.go:179] * Using the docker driver based on existing profile
	I1124 09:30:06.575969  346330 start.go:309] selected driver: docker
	I1124 09:30:06.575991  346330 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:06.576130  346330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:06.577045  346330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:06.647032  346330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:06.636324087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:06.647448  346330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:06.647496  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:06.647589  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:06.647657  346330 start.go:353] cluster config:
	{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:06.650533  346330 out.go:179] * Starting "default-k8s-diff-port-164377" primary control-plane node in "default-k8s-diff-port-164377" cluster
	I1124 09:30:06.651739  346330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:06.653211  346330 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:06.654363  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:06.654400  346330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:06.654410  346330 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:06.654487  346330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:06.654511  346330 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:06.654523  346330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:06.654642  346330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/config.json ...
	I1124 09:30:06.679071  346330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:06.679111  346330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:06.679134  346330 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:06.679171  346330 start.go:360] acquireMachinesLock for default-k8s-diff-port-164377: {Name:mkd718f87c8feaecdc5abdde6ac9abecef458b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:06.679247  346330 start.go:364] duration metric: took 41.913µs to acquireMachinesLock for "default-k8s-diff-port-164377"
	I1124 09:30:06.679271  346330 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:30:06.679283  346330 fix.go:54] fixHost starting: 
	I1124 09:30:06.679552  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:06.700873  346330 fix.go:112] recreateIfNeeded on default-k8s-diff-port-164377: state=Stopped err=<nil>
	W1124 09:30:06.700907  346330 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:30:05.577240  344762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:30:05.577593  344762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:30:05.730192  344762 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:05.737102  344762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:30:05.777974  344762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:30:05.783787  344762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:30:05.783922  344762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:30:05.793653  344762 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:30:05.793678  344762 start.go:496] detecting cgroup driver to use...
	I1124 09:30:05.793712  344762 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:30:05.793760  344762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:30:05.811401  344762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:30:05.826362  344762 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:30:05.826431  344762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:30:05.842914  344762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:30:05.857052  344762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:30:05.940892  344762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:30:06.021822  344762 docker.go:234] disabling docker service ...
	I1124 09:30:06.021882  344762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:30:06.037630  344762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:30:06.051983  344762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:30:06.166624  344762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:30:06.261959  344762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:30:06.275797  344762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:30:06.290451  344762 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:06.456737  344762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:30:06.456807  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.467493  344762 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:30:06.467549  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.479222  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.489422  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.500256  344762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:30:06.509775  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.519592  344762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.532045  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.544442  344762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:30:06.553889  344762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:30:06.562531  344762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:06.662552  344762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:30:06.816154  344762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:30:06.816221  344762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:30:06.821909  344762 start.go:564] Will wait 60s for crictl version
	I1124 09:30:06.821961  344762 ssh_runner.go:195] Run: which crictl
	I1124 09:30:06.826969  344762 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:30:06.863985  344762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:30:06.864134  344762 ssh_runner.go:195] Run: crio --version
	I1124 09:30:06.898441  344762 ssh_runner.go:195] Run: crio --version
	I1124 09:30:06.934510  344762 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:30:06.935829  344762 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:06.955142  344762 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:06.959523  344762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:06.972271  344762 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Nov 24 09:29:40 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:40.623298341Z" level=info msg="Started container" PID=1738 containerID=2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper id=243a45c7-bcdc-4395-99c0-757ba44e5d9e name=/runtime.v1.RuntimeService/StartContainer sandboxID=74bf9e1325f1e1e0a270d4e60e47d58a48216492644f7e246ab2e0ae5c2e9b16
	Nov 24 09:29:41 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:41.579273361Z" level=info msg="Removing container: cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863" id=d709fa38-2dde-4aa1-937a-9d58190acc14 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:41 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:41.635121801Z" level=info msg="Removed container cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=d709fa38-2dde-4aa1-937a-9d58190acc14 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.604281165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f5ff9aec-9aa1-4b7a-8c62-1af04a8e50b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.605108497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=22a4bca8-7217-47c1-836d-92609b442f7a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.606037156Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a2ccfec8-1979-44a2-a776-bf3236c67c23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.606155902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611737212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611868457Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/60a581367fab2423814aad44feee2891ed572c16465331ed1624b54cb00bac22/merged/etc/passwd: no such file or directory"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611891721Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/60a581367fab2423814aad44feee2891ed572c16465331ed1624b54cb00bac22/merged/etc/group: no such file or directory"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.612151161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.647221086Z" level=info msg="Created container 66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017: kube-system/storage-provisioner/storage-provisioner" id=a2ccfec8-1979-44a2-a776-bf3236c67c23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.647800795Z" level=info msg="Starting container: 66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017" id=b2f1e3b5-747b-437e-91f7-af64c3a2554a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.649429839Z" level=info msg="Started container" PID=1752 containerID=66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017 description=kube-system/storage-provisioner/storage-provisioner id=b2f1e3b5-747b-437e-91f7-af64c3a2554a name=/runtime.v1.RuntimeService/StartContainer sandboxID=215528199b5cb12333a234ce47e0f98e1d6b967783b352f41089674626ab7236
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.470831135Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b8470324-fb24-4ae6-89ae-29aac724d08c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.471822087Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1c3e1586-c897-4a69-a285-d1543151c368 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.472837595Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=cee4d5af-70d6-4ff1-9aa6-837dfac54c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.472963476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.478952398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.479410611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.507828164Z" level=info msg="Created container d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=cee4d5af-70d6-4ff1-9aa6-837dfac54c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.508479272Z" level=info msg="Starting container: d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593" id=01b696b3-b418-4441-9016-5f35e06934e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.510487271Z" level=info msg="Started container" PID=1787 containerID=d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper id=01b696b3-b418-4441-9016-5f35e06934e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74bf9e1325f1e1e0a270d4e60e47d58a48216492644f7e246ab2e0ae5c2e9b16
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.62639706Z" level=info msg="Removing container: 2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331" id=48fdc600-fe34-4113-8057-af435c24902f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.639121406Z" level=info msg="Removed container 2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=48fdc600-fe34-4113-8057-af435c24902f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d0f8be0ae8c61       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   74bf9e1325f1e       dashboard-metrics-scraper-5f989dc9cf-b4gsn       kubernetes-dashboard
	66c06981b7cda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   215528199b5cb       storage-provisioner                              kube-system
	c1607c95af442       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   018f5126a36db       kubernetes-dashboard-8694d4445c-4mz29            kubernetes-dashboard
	fa632ef36d7da       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           46 seconds ago      Running             busybox                     1                   42f4d2ebf8485       busybox                                          default
	6b40170103002       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   790677be60237       coredns-5dd5756b68-gmgwv                         kube-system
	110b0f1e92e3b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   96021fa6494ed       kindnet-8tdrm                                    kube-system
	c81d8b42b91ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   215528199b5cb       storage-provisioner                              kube-system
	5279dc42578dd       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   5c947f538b610       kube-proxy-b8kgc                                 kube-system
	d7f9989bef9cd       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           50 seconds ago      Running             kube-controller-manager     0                   63422659353e5       kube-controller-manager-old-k8s-version-767267   kube-system
	957db9e46cf07       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           50 seconds ago      Running             etcd                        0                   96679a945021e       etcd-old-k8s-version-767267                      kube-system
	d8a882f20879e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           50 seconds ago      Running             kube-scheduler              0                   1510f6cb4f751       kube-scheduler-old-k8s-version-767267            kube-system
	f2432e65f01bf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           50 seconds ago      Running             kube-apiserver              0                   cd7df9a0659a2       kube-apiserver-old-k8s-version-767267            kube-system
	
	
	==> coredns [6b40170103002c57e488dd5ac0b91b8f6f8b44bfab54d31beec69f718e520ca1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59685 - 28119 "HINFO IN 935536319946958349.8752353859356753209. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015574588s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-767267
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-767267
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=old-k8s-version-767267
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-767267
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-767267
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f1cd3fa8-d2f0-4c2f-8873-1620b1eea27a
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-gmgwv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-767267                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-8tdrm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-767267             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-767267    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-b8kgc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-767267             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b4gsn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4mz29             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-767267 event: Registered Node old-k8s-version-767267 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-767267 status is now: NodeReady
	  Normal  Starting                 51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                  node-controller  Node old-k8s-version-767267 event: Registered Node old-k8s-version-767267 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [957db9e46cf07c1058ffd4395c982714d8f71f43483a3d024a3aed61ac25b6da] <==
	{"level":"info","ts":"2025-11-24T09:29:17.13805Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138303Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138322Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138572Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-24T09:29:17.138771Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:29:17.138839Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:29:17.139402Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T09:29:17.139722Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T09:29:17.139802Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T09:29:17.139928Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T09:29:17.140002Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T09:29:18.118721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.118771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.118803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.120431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.130405Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-767267 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T09:29:18.130612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:29:18.136677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:29:18.141185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T09:29:18.146446Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:29:18.146485Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:29:18.153171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:30:07 up  1:12,  0 user,  load average: 3.89, 3.35, 2.21
	Linux old-k8s-version-767267 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [110b0f1e92e3b1d9592bcc10ac6ba1b1ffa82c44f520bca311539c7415f55584] <==
	I1124 09:29:21.052207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:21.052463       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:29:21.052599       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:21.052614       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:21.052636       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:21.258776       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:21.258821       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:21.258835       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:21.258988       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:21.551245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:21.551279       1 metrics.go:72] Registering metrics
	I1124 09:29:21.551501       1 controller.go:711] "Syncing nftables rules"
	I1124 09:29:31.266425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:31.266490       1 main.go:301] handling current node
	I1124 09:29:41.258547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:41.258592       1 main.go:301] handling current node
	I1124 09:29:51.258436       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:51.258474       1 main.go:301] handling current node
	I1124 09:30:01.260315       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:30:01.260395       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2432e65f01bff8ce99f46c9bef8a0c8d04e2a92a461bc81ed22a74e42f65cb1] <==
	I1124 09:29:19.606760       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:19.617721       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 09:29:19.653610       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 09:29:19.653688       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 09:29:19.653607       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:29:19.654433       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 09:29:19.654595       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 09:29:19.653617       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 09:29:19.654735       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 09:29:19.655101       1 aggregator.go:166] initial CRD sync complete...
	I1124 09:29:19.655109       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 09:29:19.655115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:29:19.655122       1 cache.go:39] Caches are synced for autoregister controller
	E1124 09:29:19.659787       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:29:20.518245       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 09:29:20.547878       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 09:29:20.555960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:29:20.565436       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:20.573654       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:20.580258       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 09:29:20.615311       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.17.236"}
	I1124 09:29:20.643970       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.249.208"}
	I1124 09:29:32.308630       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 09:29:32.343292       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:32.436525       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d7f9989bef9cdec8bcf2a9dc31466db9a4b0ee30c0360721775fc7e2491ff2b2] <==
	I1124 09:29:32.475699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.928193ms"
	I1124 09:29:32.492212       1 shared_informer.go:318] Caches are synced for disruption
	I1124 09:29:32.494397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.174408ms"
	I1124 09:29:32.494610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.02µs"
	I1124 09:29:32.494664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.896µs"
	I1124 09:29:32.498121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.374819ms"
	I1124 09:29:32.498861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.768µs"
	I1124 09:29:32.507381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.198052ms"
	I1124 09:29:32.507595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.662µs"
	I1124 09:29:32.516622       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 09:29:32.520785       1 shared_informer.go:318] Caches are synced for job
	I1124 09:29:32.549898       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1124 09:29:32.549917       1 shared_informer.go:318] Caches are synced for cronjob
	I1124 09:29:32.867963       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:29:32.870238       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:29:32.870276       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 09:29:37.599532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.306637ms"
	I1124 09:29:37.599690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.738µs"
	I1124 09:29:40.587523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.485µs"
	I1124 09:29:41.631640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.001µs"
	I1124 09:29:42.594375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.626µs"
	I1124 09:29:51.680023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.059622ms"
	I1124 09:29:51.680149       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.761µs"
	I1124 09:29:54.640223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.038µs"
	I1124 09:30:02.777575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.465µs"
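
Each "Finished syncing" line above is a single ReplicaSet reconcile pass with its duration; the sub-100µs entries are effectively no-op resyncs triggered by status updates. An illustrative way to list the ReplicaSets being reconciled (same context as above):

	kubectl --context old-k8s-version-767267 -n kubernetes-dashboard get rs -o wide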
	
	
	==> kube-proxy [5279dc42578ddbdd88a86d520878aa8c2388ac69fa692cff0d5c39910b815079] <==
	I1124 09:29:20.851751       1 server_others.go:69] "Using iptables proxy"
	I1124 09:29:20.863291       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 09:29:20.881822       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:20.884132       1 server_others.go:152] "Using iptables Proxier"
	I1124 09:29:20.884175       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 09:29:20.884183       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 09:29:20.884211       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 09:29:20.884635       1 server.go:846] "Version info" version="v1.28.0"
	I1124 09:29:20.884657       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:20.885311       1 config.go:188] "Starting service config controller"
	I1124 09:29:20.885381       1 config.go:315] "Starting node config controller"
	I1124 09:29:20.885401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 09:29:20.885404       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 09:29:20.885534       1 config.go:97] "Starting endpoint slice config controller"
	I1124 09:29:20.885555       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 09:29:20.986371       1 shared_informer.go:318] Caches are synced for node config
	I1124 09:29:20.986412       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 09:29:20.986427       1 shared_informer.go:318] Caches are synced for service config
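
kube-proxy came up in iptables mode and had all three config caches synced within roughly 100ms. A minimal sketch for confirming the resulting NAT rules from inside the node, assuming the KUBE-SERVICES chain that the iptables proxier normally installs:

	out/minikube-linux-amd64 -p old-k8s-version-767267 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head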
	
	
	==> kube-scheduler [d8a882f20879e8f43298374296d4ea577c4a71dd2a327551055374134f9728dc] <==
	I1124 09:29:18.272445       1 serving.go:348] Generated self-signed cert in-memory
	W1124 09:29:19.579007       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:29:19.579043       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:29:19.579056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:29:19.579067       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:29:19.605057       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 09:29:19.605131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:19.608825       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:29:19.608885       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 09:29:19.613558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 09:29:19.613653       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 09:29:19.709798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
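
The requestheader warnings are non-fatal: as logged, the scheduler continues without the authentication configuration. Since the scheduler authenticates as the user system:kube-scheduler rather than a service account, a hypothetical form of the fix the log suggests (not something this test ran) would be:

	kubectl --context old-k8s-version-767267 -n kube-system create rolebinding scheduler-auth-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler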
	
	
	==> kubelet <==
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578605     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2140f28a-310d-48ca-ab87-329ddfaaf554-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4mz29\" (UID: \"2140f28a-310d-48ca-ab87-329ddfaaf554\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578675     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5t8\" (UniqueName: \"kubernetes.io/projected/2140f28a-310d-48ca-ab87-329ddfaaf554-kube-api-access-7l5t8\") pod \"kubernetes-dashboard-8694d4445c-4mz29\" (UID: \"2140f28a-310d-48ca-ab87-329ddfaaf554\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578757     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1449d41-8d79-47a8-ad01-9be6b14fee6a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b4gsn\" (UID: \"c1449d41-8d79-47a8-ad01-9be6b14fee6a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578831     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v592p\" (UniqueName: \"kubernetes.io/projected/c1449d41-8d79-47a8-ad01-9be6b14fee6a-kube-api-access-v592p\") pod \"dashboard-metrics-scraper-5f989dc9cf-b4gsn\" (UID: \"c1449d41-8d79-47a8-ad01-9be6b14fee6a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn"
	Nov 24 09:29:40 old-k8s-version-767267 kubelet[725]: I1124 09:29:40.573012     725 scope.go:117] "RemoveContainer" containerID="cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863"
	Nov 24 09:29:40 old-k8s-version-767267 kubelet[725]: I1124 09:29:40.587302     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29" podStartSLOduration=4.276921148 podCreationTimestamp="2025-11-24 09:29:32 +0000 UTC" firstStartedPulling="2025-11-24 09:29:32.787699612 +0000 UTC m=+16.499055991" lastFinishedPulling="2025-11-24 09:29:37.098020317 +0000 UTC m=+20.809376688" observedRunningTime="2025-11-24 09:29:37.587533815 +0000 UTC m=+21.298890201" watchObservedRunningTime="2025-11-24 09:29:40.587241845 +0000 UTC m=+24.298598231"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: I1124 09:29:41.577976     725 scope.go:117] "RemoveContainer" containerID="cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: I1124 09:29:41.578173     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: E1124 09:29:41.578566     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:42 old-k8s-version-767267 kubelet[725]: I1124 09:29:42.582645     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:42 old-k8s-version-767267 kubelet[725]: E1124 09:29:42.583028     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:43 old-k8s-version-767267 kubelet[725]: I1124 09:29:43.585329     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:43 old-k8s-version-767267 kubelet[725]: E1124 09:29:43.585769     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:51 old-k8s-version-767267 kubelet[725]: I1124 09:29:51.603820     725 scope.go:117] "RemoveContainer" containerID="c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.470162     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.619784     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.620778     725 scope.go:117] "RemoveContainer" containerID="d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: E1124 09:29:54.621532     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:30:02 old-k8s-version-767267 kubelet[725]: I1124 09:30:02.767255     725 scope.go:117] "RemoveContainer" containerID="d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	Nov 24 09:30:02 old-k8s-version-767267 kubelet[725]: E1124 09:30:02.767716     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:05 old-k8s-version-767267 kubelet[725]: I1124 09:30:05.294215     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: kubelet.service: Consumed 1.484s CPU time.
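
These entries show dashboard-metrics-scraper in CrashLoopBackOff, with the kubelet's restart back-off doubling from 10s to 20s (it continues doubling up to a 5m cap). To inspect the restart count and last termination state of the pod named in the log:

	kubectl --context old-k8s-version-767267 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-b4gsn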
	
	
	==> kubernetes-dashboard [c1607c95af442b8f122f80ce26959c6a88568b9ef8983223eaf0f1c71d0f3da6] <==
	2025/11/24 09:29:37 Using namespace: kubernetes-dashboard
	2025/11/24 09:29:37 Using in-cluster config to connect to apiserver
	2025/11/24 09:29:37 Using secret token for csrf signing
	2025/11/24 09:29:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:29:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:29:37 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 09:29:37 Generating JWE encryption key
	2025/11/24 09:29:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:29:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:29:37 Initializing JWE encryption key from synchronized object
	2025/11/24 09:29:37 Creating in-cluster Sidecar client
	2025/11/24 09:29:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:37 Serving insecurely on HTTP port: 9090
	2025/11/24 09:29:37 Starting overwatch
	2025/11/24 09:30:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
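
The dashboard itself is up and serving plain HTTP on container port 9090; only its metrics health check fails while dashboard-metrics-scraper crash-loops. A hedged probe, port-forwarding straight to the pod named in the kubelet log above:

	kubectl --context old-k8s-version-767267 -n kubernetes-dashboard port-forward pod/kubernetes-dashboard-8694d4445c-4mz29 9090:9090 &
	curl -s http://127.0.0.1:9090/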
	
	
	==> storage-provisioner [66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017] <==
	I1124 09:29:51.662640       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:29:51.671749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:29:51.671800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771] <==
	I1124 09:29:20.827888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:29:50.830395       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
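
The first storage-provisioner container exited fatally because the in-cluster apiserver VIP (10.96.0.1:443) was unreachable within its 30s version probe; the replacement container above then initialized and proceeded to leader election. To verify the VIP from the node itself (assuming curl is present in the kicbase image):

	out/minikube-linux-amd64 -p old-k8s-version-767267 ssh -- curl -sk https://10.96.0.1:443/version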
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-767267 -n old-k8s-version-767267: exit status 2 (444.962903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-767267 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-767267
helpers_test.go:243: (dbg) docker inspect old-k8s-version-767267:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	        "Created": "2025-11-24T09:27:59.477215384Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:29:09.801452132Z",
	            "FinishedAt": "2025-11-24T09:29:08.869079826Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/hosts",
	        "LogPath": "/var/lib/docker/containers/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558/b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558-json.log",
	        "Name": "/old-k8s-version-767267",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-767267:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-767267",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2fbca5819e3f5a42cff7fb188ccf3de5247a5f7ac7e295e37e58ae1e799c558",
	                "LowerDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfcf45933caecf8e7d2b243d198f9bc2d5400293f55e5cd9b0e7b48d7d34caac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-767267",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-767267/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-767267",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-767267",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b952731a880517011162eca6cbab45544899b359c1ed5711fd0c21a59f3d9a1",
	            "SandboxKey": "/var/run/docker/netns/8b952731a880",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-767267": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49a891848d14199803dd04f544287d94ca351d74be411134145450566451080b",
	                    "EndpointID": "cbc015a94f8eea4db6cf39cb2a2bf7bd2ec35605296cf437d6f7ded17b3f666a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:58:55:85:9a:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-767267",
	                        "b2fbca5819e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
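
The inspect output confirms the node container is running and un-paused despite the pause failure. When only a few of these fields matter, a Go template keeps the check to one line (illustrative commands using the container name from this report):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-767267
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-767267
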
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267: exit status 2 (382.501716ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-767267 logs -n 25: (1.333323363s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-949664 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ ssh     │ -p bridge-949664 sudo crio config                                                                                                                                                                                                                    │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ delete  │ -p bridge-949664                                                                                                                                                                                                                                     │ bridge-949664                │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:28 UTC │
	│ stop    │ -p old-k8s-version-767267 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p no-preload-938348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │                     │
	│ stop    │ -p no-preload-938348 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:28 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:06.455406  346330 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:06.455795  346330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:06.455815  346330 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:06.455822  346330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:06.456125  346330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:06.456847  346330 out.go:368] Setting JSON to false
	I1124 09:30:06.458780  346330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4352,"bootTime":1763972254,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:06.458858  346330 start.go:143] virtualization: kvm guest
	I1124 09:30:06.460773  346330 out.go:179] * [default-k8s-diff-port-164377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:06.461935  346330 notify.go:221] Checking for updates...
	I1124 09:30:06.461943  346330 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:06.463463  346330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:06.464783  346330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:06.466213  346330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:06.467544  346330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:06.468816  346330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:06.470709  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:06.471583  346330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:06.501679  346330 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:06.501765  346330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:06.565053  346330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:06.554042895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:06.565207  346330 docker.go:319] overlay module found
	I1124 09:30:06.574530  346330 out.go:179] * Using the docker driver based on existing profile
	I1124 09:30:06.575969  346330 start.go:309] selected driver: docker
	I1124 09:30:06.575991  346330 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:06.576130  346330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:06.577045  346330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:06.647032  346330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:06.636324087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:06.647448  346330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:06.647496  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:06.647589  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:06.647657  346330 start.go:353] cluster config:
	{Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:06.650533  346330 out.go:179] * Starting "default-k8s-diff-port-164377" primary control-plane node in "default-k8s-diff-port-164377" cluster
	I1124 09:30:06.651739  346330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:06.653211  346330 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:06.654363  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:06.654400  346330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:06.654410  346330 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:06.654487  346330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:06.654511  346330 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:06.654523  346330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:06.654642  346330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/config.json ...
	I1124 09:30:06.679071  346330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:06.679111  346330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:06.679134  346330 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:06.679171  346330 start.go:360] acquireMachinesLock for default-k8s-diff-port-164377: {Name:mkd718f87c8feaecdc5abdde6ac9abecef458b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:06.679247  346330 start.go:364] duration metric: took 41.913µs to acquireMachinesLock for "default-k8s-diff-port-164377"
	I1124 09:30:06.679271  346330 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:30:06.679283  346330 fix.go:54] fixHost starting: 
	I1124 09:30:06.679552  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:06.700873  346330 fix.go:112] recreateIfNeeded on default-k8s-diff-port-164377: state=Stopped err=<nil>
	W1124 09:30:06.700907  346330 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:30:05.577240  344762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:30:05.577593  344762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:30:05.730192  344762 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:05.737102  344762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:30:05.777974  344762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:30:05.783787  344762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:30:05.783922  344762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:30:05.793653  344762 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:30:05.793678  344762 start.go:496] detecting cgroup driver to use...
	I1124 09:30:05.793712  344762 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:30:05.793760  344762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:30:05.811401  344762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:30:05.826362  344762 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:30:05.826431  344762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:30:05.842914  344762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:30:05.857052  344762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:30:05.940892  344762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:30:06.021822  344762 docker.go:234] disabling docker service ...
	I1124 09:30:06.021882  344762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:30:06.037630  344762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:30:06.051983  344762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:30:06.166624  344762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:30:06.261959  344762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:30:06.275797  344762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:30:06.290451  344762 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:06.456737  344762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:30:06.456807  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.467493  344762 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:30:06.467549  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.479222  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.489422  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.500256  344762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:30:06.509775  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.519592  344762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:06.532045  344762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
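	Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands in the log; the rest of the file is not shown here):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	The systemctl daemon-reload and restart crio steps that follow are what make these settings take effect.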
	I1124 09:30:06.544442  344762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:30:06.553889  344762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:30:06.562531  344762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:06.662552  344762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:30:06.816154  344762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:30:06.816221  344762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:30:06.821909  344762 start.go:564] Will wait 60s for crictl version
	I1124 09:30:06.821961  344762 ssh_runner.go:195] Run: which crictl
	I1124 09:30:06.826969  344762 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:30:06.863985  344762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:30:06.864134  344762 ssh_runner.go:195] Run: crio --version
	I1124 09:30:06.898441  344762 ssh_runner.go:195] Run: crio --version
	I1124 09:30:06.934510  344762 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:30:06.935829  344762 cli_runner.go:164] Run: docker network inspect newest-cni-639420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:06.955142  344762 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:06.959523  344762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:06.972271  344762 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 09:30:06.973671  344762 kubeadm.go:884] updating cluster {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:06.973948  344762 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:07.155692  344762 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:07.326251  344762 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:07.503116  344762 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:30:07.503162  344762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:07.541771  344762 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:07.541790  344762 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:07.541797  344762 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:30:07.541893  344762 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-639420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:30:07.541958  344762 ssh_runner.go:195] Run: crio config
	I1124 09:30:07.590642  344762 cni.go:84] Creating CNI manager for ""
	I1124 09:30:07.590666  344762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:07.590685  344762 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 09:30:07.590714  344762 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639420 NodeName:newest-cni-639420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:07.590885  344762 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
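	If you need to sanity-check a generated multi-document config like the one above by hand, recent kubeadm releases include a validator; a sketch, using the path this run writes the file to:
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new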
	
	I1124 09:30:07.590959  344762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:30:07.600024  344762 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:07.600081  344762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:07.608204  344762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1124 09:30:07.621130  344762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:30:07.634520  344762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:30:07.647838  344762 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:07.651703  344762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
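	After the two hosts-file rewrites in this run (host.minikube.internal at 09:30:06 and control-plane.minikube.internal here), /etc/hosts on the node should carry entries along these lines (reconstructed from the grep/echo pipelines; unrelated entries omitted):
	    192.168.103.1	host.minikube.internal
	    192.168.103.2	control-plane.minikube.internal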
	I1124 09:30:07.663387  344762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:07.751938  344762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:07.775924  344762 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420 for IP: 192.168.103.2
	I1124 09:30:07.775944  344762 certs.go:195] generating shared ca certs ...
	I1124 09:30:07.775963  344762 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:07.776154  344762 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:07.776221  344762 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:07.776234  344762 certs.go:257] generating profile certs ...
	I1124 09:30:07.776360  344762 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/client.key
	I1124 09:30:07.776437  344762 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key.145b87e5
	I1124 09:30:07.776493  344762 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key
	I1124 09:30:07.776629  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:07.776670  344762 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:07.776684  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:07.776718  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:07.776753  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:07.776790  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:07.776845  344762 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:07.777693  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:07.799818  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:07.819078  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:07.837452  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:07.858805  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:30:07.883414  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:07.902254  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:07.921120  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/newest-cni-639420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:07.939028  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:07.956598  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:07.976655  344762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:07.995786  344762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:08.009901  344762 ssh_runner.go:195] Run: openssl version
	I1124 09:30:08.016916  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:08.026120  344762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:08.029853  344762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:08.029896  344762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:08.068933  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:08.077902  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:08.087302  344762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:08.091199  344762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:08.091254  344762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:08.129300  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:08.137593  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:08.147674  344762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:08.151793  344762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:08.151849  344762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:08.188985  344762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
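	The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL c_rehash convention: the link name is the certificate's subject hash, as printed by openssl x509 -hash, plus a .0 suffix. A minimal sketch of the same step done by hand, using the minikubeCA paths from this log:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, per the link created above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0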
	I1124 09:30:08.198161  344762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:08.202377  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:08.240094  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:08.281486  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:08.337560  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:08.393589  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:08.452657  344762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
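	Each -checkend 86400 probe above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl exits 0 if the cert stays valid past that window and non-zero otherwise, so a zero exit here means no renewal is needed. An equivalent one-liner for a single cert (sketch):
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for >24h" || echo "expiring within 24h"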
	I1124 09:30:08.503414  344762 kubeadm.go:401] StartCluster: {Name:newest-cni-639420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-639420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:08.503578  344762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:08.503672  344762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:08.533258  344762 cri.go:89] found id: "88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e"
	I1124 09:30:08.533283  344762 cri.go:89] found id: "8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583"
	I1124 09:30:08.533288  344762 cri.go:89] found id: "30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb"
	I1124 09:30:08.533293  344762 cri.go:89] found id: "9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3"
	I1124 09:30:08.533298  344762 cri.go:89] found id: ""
	I1124 09:30:08.533353  344762 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:08.545519  344762 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:08Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:08.545600  344762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:08.554524  344762 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:08.554542  344762 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:08.554582  344762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:08.562869  344762 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:08.563722  344762 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639420" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:08.564210  344762 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639420" cluster setting kubeconfig missing "newest-cni-639420" context setting]
	I1124 09:30:08.565113  344762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:08.566964  344762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:08.576455  344762 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 09:30:08.576482  344762 kubeadm.go:602] duration metric: took 21.935121ms to restartPrimaryControlPlane
	I1124 09:30:08.576491  344762 kubeadm.go:403] duration metric: took 73.085973ms to StartCluster
	I1124 09:30:08.576576  344762 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:08.576641  344762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:08.577765  344762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:08.577986  344762 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:08.578057  344762 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:08.578174  344762 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-639420"
	I1124 09:30:08.578193  344762 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-639420"
	W1124 09:30:08.578201  344762 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:08.578205  344762 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:08.578230  344762 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:30:08.578240  344762 addons.go:70] Setting dashboard=true in profile "newest-cni-639420"
	I1124 09:30:08.578250  344762 addons.go:239] Setting addon dashboard=true in "newest-cni-639420"
	W1124 09:30:08.578255  344762 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:08.578271  344762 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:30:08.578371  344762 addons.go:70] Setting default-storageclass=true in profile "newest-cni-639420"
	I1124 09:30:08.578389  344762 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639420"
	I1124 09:30:08.578717  344762 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:30:08.578722  344762 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:30:08.579047  344762 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:30:08.583708  344762 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:08.584929  344762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:08.614642  344762 addons.go:239] Setting addon default-storageclass=true in "newest-cni-639420"
	W1124 09:30:08.614730  344762 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:08.614759  344762 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:30:08.615397  344762 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:30:08.618309  344762 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:08.621089  344762 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:08.622277  344762 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:08.622297  344762 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:08.622400  344762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:30:08.634379  344762 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 24 09:29:40 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:40.623298341Z" level=info msg="Started container" PID=1738 containerID=2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper id=243a45c7-bcdc-4395-99c0-757ba44e5d9e name=/runtime.v1.RuntimeService/StartContainer sandboxID=74bf9e1325f1e1e0a270d4e60e47d58a48216492644f7e246ab2e0ae5c2e9b16
	Nov 24 09:29:41 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:41.579273361Z" level=info msg="Removing container: cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863" id=d709fa38-2dde-4aa1-937a-9d58190acc14 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:41 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:41.635121801Z" level=info msg="Removed container cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=d709fa38-2dde-4aa1-937a-9d58190acc14 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.604281165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f5ff9aec-9aa1-4b7a-8c62-1af04a8e50b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.605108497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=22a4bca8-7217-47c1-836d-92609b442f7a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.606037156Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a2ccfec8-1979-44a2-a776-bf3236c67c23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.606155902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611737212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611868457Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/60a581367fab2423814aad44feee2891ed572c16465331ed1624b54cb00bac22/merged/etc/passwd: no such file or directory"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.611891721Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/60a581367fab2423814aad44feee2891ed572c16465331ed1624b54cb00bac22/merged/etc/group: no such file or directory"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.612151161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.647221086Z" level=info msg="Created container 66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017: kube-system/storage-provisioner/storage-provisioner" id=a2ccfec8-1979-44a2-a776-bf3236c67c23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.647800795Z" level=info msg="Starting container: 66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017" id=b2f1e3b5-747b-437e-91f7-af64c3a2554a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:51 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:51.649429839Z" level=info msg="Started container" PID=1752 containerID=66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017 description=kube-system/storage-provisioner/storage-provisioner id=b2f1e3b5-747b-437e-91f7-af64c3a2554a name=/runtime.v1.RuntimeService/StartContainer sandboxID=215528199b5cb12333a234ce47e0f98e1d6b967783b352f41089674626ab7236
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.470831135Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b8470324-fb24-4ae6-89ae-29aac724d08c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.471822087Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1c3e1586-c897-4a69-a285-d1543151c368 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.472837595Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=cee4d5af-70d6-4ff1-9aa6-837dfac54c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.472963476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.478952398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.479410611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.507828164Z" level=info msg="Created container d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=cee4d5af-70d6-4ff1-9aa6-837dfac54c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.508479272Z" level=info msg="Starting container: d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593" id=01b696b3-b418-4441-9016-5f35e06934e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.510487271Z" level=info msg="Started container" PID=1787 containerID=d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper id=01b696b3-b418-4441-9016-5f35e06934e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74bf9e1325f1e1e0a270d4e60e47d58a48216492644f7e246ab2e0ae5c2e9b16
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.62639706Z" level=info msg="Removing container: 2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331" id=48fdc600-fe34-4113-8057-af435c24902f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:54 old-k8s-version-767267 crio[566]: time="2025-11-24T09:29:54.639121406Z" level=info msg="Removed container 2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn/dashboard-metrics-scraper" id=48fdc600-fe34-4113-8057-af435c24902f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d0f8be0ae8c61       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   74bf9e1325f1e       dashboard-metrics-scraper-5f989dc9cf-b4gsn       kubernetes-dashboard
	66c06981b7cda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   215528199b5cb       storage-provisioner                              kube-system
	c1607c95af442       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   018f5126a36db       kubernetes-dashboard-8694d4445c-4mz29            kubernetes-dashboard
	fa632ef36d7da       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   42f4d2ebf8485       busybox                                          default
	6b40170103002       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   790677be60237       coredns-5dd5756b68-gmgwv                         kube-system
	110b0f1e92e3b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   96021fa6494ed       kindnet-8tdrm                                    kube-system
	c81d8b42b91ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   215528199b5cb       storage-provisioner                              kube-system
	5279dc42578dd       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   5c947f538b610       kube-proxy-b8kgc                                 kube-system
	d7f9989bef9cd       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   63422659353e5       kube-controller-manager-old-k8s-version-767267   kube-system
	957db9e46cf07       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   96679a945021e       etcd-old-k8s-version-767267                      kube-system
	d8a882f20879e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   1510f6cb4f751       kube-scheduler-old-k8s-version-767267            kube-system
	f2432e65f01bf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   cd7df9a0659a2       kube-apiserver-old-k8s-version-767267            kube-system
	
	
	==> coredns [6b40170103002c57e488dd5ac0b91b8f6f8b44bfab54d31beec69f718e520ca1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59685 - 28119 "HINFO IN 935536319946958349.8752353859356753209. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015574588s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-767267
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-767267
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=old-k8s-version-767267
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-767267
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:29:50 +0000   Mon, 24 Nov 2025 09:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-767267
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f1cd3fa8-d2f0-4c2f-8873-1620b1eea27a
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-gmgwv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-767267                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-8tdrm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-767267             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-767267    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-b8kgc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-767267             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b4gsn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4mz29             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-767267 event: Registered Node old-k8s-version-767267 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-767267 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-767267 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-767267 event: Registered Node old-k8s-version-767267 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [957db9e46cf07c1058ffd4395c982714d8f71f43483a3d024a3aed61ac25b6da] <==
	{"level":"info","ts":"2025-11-24T09:29:17.13805Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138303Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138322Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T09:29:17.138572Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-24T09:29:17.138771Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:29:17.138839Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:29:17.139402Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T09:29:17.139722Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T09:29:17.139802Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T09:29:17.139928Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T09:29:17.140002Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T09:29:18.118721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.118771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.118803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T09:29:18.120431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.120479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T09:29:18.130405Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-767267 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T09:29:18.130612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:29:18.136677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:29:18.141185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T09:29:18.146446Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:29:18.146485Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:29:18.153171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:30:10 up  1:12,  0 user,  load average: 3.89, 3.35, 2.21
	Linux old-k8s-version-767267 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [110b0f1e92e3b1d9592bcc10ac6ba1b1ffa82c44f520bca311539c7415f55584] <==
	I1124 09:29:21.052207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:21.052463       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:29:21.052599       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:21.052614       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:21.052636       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:21.258776       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:21.258821       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:21.258835       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:21.258988       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:21.551245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:21.551279       1 metrics.go:72] Registering metrics
	I1124 09:29:21.551501       1 controller.go:711] "Syncing nftables rules"
	I1124 09:29:31.266425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:31.266490       1 main.go:301] handling current node
	I1124 09:29:41.258547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:41.258592       1 main.go:301] handling current node
	I1124 09:29:51.258436       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:29:51.258474       1 main.go:301] handling current node
	I1124 09:30:01.260315       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:30:01.260395       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2432e65f01bff8ce99f46c9bef8a0c8d04e2a92a461bc81ed22a74e42f65cb1] <==
	I1124 09:29:19.606760       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:19.617721       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 09:29:19.653610       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 09:29:19.653688       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 09:29:19.653607       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:29:19.654433       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 09:29:19.654595       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 09:29:19.653617       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 09:29:19.654735       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 09:29:19.655101       1 aggregator.go:166] initial CRD sync complete...
	I1124 09:29:19.655109       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 09:29:19.655115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:29:19.655122       1 cache.go:39] Caches are synced for autoregister controller
	E1124 09:29:19.659787       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:29:20.518245       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 09:29:20.547878       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 09:29:20.555960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:29:20.565436       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:20.573654       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:20.580258       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 09:29:20.615311       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.17.236"}
	I1124 09:29:20.643970       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.249.208"}
	I1124 09:29:32.308630       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 09:29:32.343292       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:32.436525       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d7f9989bef9cdec8bcf2a9dc31466db9a4b0ee30c0360721775fc7e2491ff2b2] <==
	I1124 09:29:32.475699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.928193ms"
	I1124 09:29:32.492212       1 shared_informer.go:318] Caches are synced for disruption
	I1124 09:29:32.494397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.174408ms"
	I1124 09:29:32.494610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.02µs"
	I1124 09:29:32.494664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.896µs"
	I1124 09:29:32.498121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.374819ms"
	I1124 09:29:32.498861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.768µs"
	I1124 09:29:32.507381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.198052ms"
	I1124 09:29:32.507595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.662µs"
	I1124 09:29:32.516622       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 09:29:32.520785       1 shared_informer.go:318] Caches are synced for job
	I1124 09:29:32.549898       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1124 09:29:32.549917       1 shared_informer.go:318] Caches are synced for cronjob
	I1124 09:29:32.867963       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:29:32.870238       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:29:32.870276       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 09:29:37.599532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.306637ms"
	I1124 09:29:37.599690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.738µs"
	I1124 09:29:40.587523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.485µs"
	I1124 09:29:41.631640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.001µs"
	I1124 09:29:42.594375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.626µs"
	I1124 09:29:51.680023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.059622ms"
	I1124 09:29:51.680149       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.761µs"
	I1124 09:29:54.640223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.038µs"
	I1124 09:30:02.777575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.465µs"
	
	
	==> kube-proxy [5279dc42578ddbdd88a86d520878aa8c2388ac69fa692cff0d5c39910b815079] <==
	I1124 09:29:20.851751       1 server_others.go:69] "Using iptables proxy"
	I1124 09:29:20.863291       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 09:29:20.881822       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:20.884132       1 server_others.go:152] "Using iptables Proxier"
	I1124 09:29:20.884175       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 09:29:20.884183       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 09:29:20.884211       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 09:29:20.884635       1 server.go:846] "Version info" version="v1.28.0"
	I1124 09:29:20.884657       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:20.885311       1 config.go:188] "Starting service config controller"
	I1124 09:29:20.885381       1 config.go:315] "Starting node config controller"
	I1124 09:29:20.885401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 09:29:20.885404       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 09:29:20.885534       1 config.go:97] "Starting endpoint slice config controller"
	I1124 09:29:20.885555       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 09:29:20.986371       1 shared_informer.go:318] Caches are synced for node config
	I1124 09:29:20.986412       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 09:29:20.986427       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d8a882f20879e8f43298374296d4ea577c4a71dd2a327551055374134f9728dc] <==
	I1124 09:29:18.272445       1 serving.go:348] Generated self-signed cert in-memory
	W1124 09:29:19.579007       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:29:19.579043       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:29:19.579056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:29:19.579067       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:29:19.605057       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 09:29:19.605131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:19.608825       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:29:19.608885       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 09:29:19.613558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 09:29:19.613653       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 09:29:19.709798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578605     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2140f28a-310d-48ca-ab87-329ddfaaf554-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4mz29\" (UID: \"2140f28a-310d-48ca-ab87-329ddfaaf554\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578675     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5t8\" (UniqueName: \"kubernetes.io/projected/2140f28a-310d-48ca-ab87-329ddfaaf554-kube-api-access-7l5t8\") pod \"kubernetes-dashboard-8694d4445c-4mz29\" (UID: \"2140f28a-310d-48ca-ab87-329ddfaaf554\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578757     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1449d41-8d79-47a8-ad01-9be6b14fee6a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b4gsn\" (UID: \"c1449d41-8d79-47a8-ad01-9be6b14fee6a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn"
	Nov 24 09:29:32 old-k8s-version-767267 kubelet[725]: I1124 09:29:32.578831     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v592p\" (UniqueName: \"kubernetes.io/projected/c1449d41-8d79-47a8-ad01-9be6b14fee6a-kube-api-access-v592p\") pod \"dashboard-metrics-scraper-5f989dc9cf-b4gsn\" (UID: \"c1449d41-8d79-47a8-ad01-9be6b14fee6a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn"
	Nov 24 09:29:40 old-k8s-version-767267 kubelet[725]: I1124 09:29:40.573012     725 scope.go:117] "RemoveContainer" containerID="cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863"
	Nov 24 09:29:40 old-k8s-version-767267 kubelet[725]: I1124 09:29:40.587302     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4mz29" podStartSLOduration=4.276921148 podCreationTimestamp="2025-11-24 09:29:32 +0000 UTC" firstStartedPulling="2025-11-24 09:29:32.787699612 +0000 UTC m=+16.499055991" lastFinishedPulling="2025-11-24 09:29:37.098020317 +0000 UTC m=+20.809376688" observedRunningTime="2025-11-24 09:29:37.587533815 +0000 UTC m=+21.298890201" watchObservedRunningTime="2025-11-24 09:29:40.587241845 +0000 UTC m=+24.298598231"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: I1124 09:29:41.577976     725 scope.go:117] "RemoveContainer" containerID="cc0bf9bb08c0830e0dcb4d066e8ece53c9cb4a82cc50215f9de192309b128863"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: I1124 09:29:41.578173     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:41 old-k8s-version-767267 kubelet[725]: E1124 09:29:41.578566     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:42 old-k8s-version-767267 kubelet[725]: I1124 09:29:42.582645     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:42 old-k8s-version-767267 kubelet[725]: E1124 09:29:42.583028     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:43 old-k8s-version-767267 kubelet[725]: I1124 09:29:43.585329     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:43 old-k8s-version-767267 kubelet[725]: E1124 09:29:43.585769     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:29:51 old-k8s-version-767267 kubelet[725]: I1124 09:29:51.603820     725 scope.go:117] "RemoveContainer" containerID="c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.470162     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.619784     725 scope.go:117] "RemoveContainer" containerID="2f62b38007071708a0b7b9211f627da1181a74063aa35173ae8cd9cd0f961331"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: I1124 09:29:54.620778     725 scope.go:117] "RemoveContainer" containerID="d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	Nov 24 09:29:54 old-k8s-version-767267 kubelet[725]: E1124 09:29:54.621532     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:30:02 old-k8s-version-767267 kubelet[725]: I1124 09:30:02.767255     725 scope.go:117] "RemoveContainer" containerID="d0f8be0ae8c61c418e98151d1a5313f9a80336d33eb4b98f505d04b8624b4593"
	Nov 24 09:30:02 old-k8s-version-767267 kubelet[725]: E1124 09:30:02.767716     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b4gsn_kubernetes-dashboard(c1449d41-8d79-47a8-ad01-9be6b14fee6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b4gsn" podUID="c1449d41-8d79-47a8-ad01-9be6b14fee6a"
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:05 old-k8s-version-767267 kubelet[725]: I1124 09:30:05.294215     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:30:05 old-k8s-version-767267 systemd[1]: kubelet.service: Consumed 1.484s CPU time.
	
	
	==> kubernetes-dashboard [c1607c95af442b8f122f80ce26959c6a88568b9ef8983223eaf0f1c71d0f3da6] <==
	2025/11/24 09:29:37 Using namespace: kubernetes-dashboard
	2025/11/24 09:29:37 Using in-cluster config to connect to apiserver
	2025/11/24 09:29:37 Using secret token for csrf signing
	2025/11/24 09:29:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:29:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:29:37 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 09:29:37 Generating JWE encryption key
	2025/11/24 09:29:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:29:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:29:37 Initializing JWE encryption key from synchronized object
	2025/11/24 09:29:37 Creating in-cluster Sidecar client
	2025/11/24 09:29:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:37 Serving insecurely on HTTP port: 9090
	2025/11/24 09:30:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:37 Starting overwatch
	
	
	==> storage-provisioner [66c06981b7cda5c354379b3259a2736a32ea560c046ddac71dc3344f51cc3017] <==
	I1124 09:29:51.662640       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:29:51.671749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:29:51.671800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 09:30:09.069831       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:30:09.069969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e485f296-a460-439b-80f5-d911ee8d6a0d", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-767267_d06f7391-72a3-4ff3-b8ed-af7acafb645b became leader
	I1124 09:30:09.070065       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-767267_d06f7391-72a3-4ff3-b8ed-af7acafb645b!
	I1124 09:30:09.170821       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-767267_d06f7391-72a3-4ff3-b8ed-af7acafb645b!
	
	
	==> storage-provisioner [c81d8b42b91ba1e59e1e5ef89bc4ef1d3ca5e91535e688ab4c2256422b06c771] <==
	I1124 09:29:20.827888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:29:50.830395       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
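For readers tracing the storage-provisioner lines above: leaderelection.go:243/253 are client-go's leader election, here acquiring a lock named k8s.io-minikube-hostpath in kube-system (the event shows the legacy Endpoints-based lock). A minimal, self-contained sketch of the same pattern follows — it uses client-go's newer Lease lock, and the identity and timings are illustrative, not the provisioner's actual values:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		// Same lease name/namespace as in the log above; a Lease lock stands in
		// for the legacy Endpoints lock the provisioner's event shows.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // hypothetical identity
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("acquired lease; start provisioning") },
				OnStoppedLeading: func() { fmt.Println("lost lease; stop") },
			},
		})
	}

This also explains the timing in the log: the second provisioner blocks at "attempting to acquire" (09:29:51) and only acquires the lease at 09:30:09, once the crashed first instance's lease has lapsed.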
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-767267 -n old-k8s-version-767267
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-767267 -n old-k8s-version-767267: exit status 2 (395.45629ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-767267 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.18s)
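One pattern worth reading out of the kubelet log in this section: the dashboard-metrics-scraper restarts are throttled by CrashLoopBackOff, and the messages show the delay doubling from "back-off 10s" to "back-off 20s". A tiny sketch of that doubling schedule — the five-minute cap is kubelet's documented maximum, not something visible in this log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Doubling restart delay, as in the "back-off 10s" then "back-off 20s"
		// kubelet messages above; kubelet caps the delay at five minutes.
		const maxDelay = 5 * time.Minute
		delay := 10 * time.Second
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: CrashLoopBackOff %s\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}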

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-639420 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-639420 --alsologtostderr -v=1: exit status 80 (2.535212177s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-639420 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:30:12.644376  349851 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:12.644497  349851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:12.644506  349851 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:12.644510  349851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:12.644761  349851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:12.645081  349851 out.go:368] Setting JSON to false
	I1124 09:30:12.645106  349851 mustload.go:66] Loading cluster: newest-cni-639420
	I1124 09:30:12.645678  349851 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:12.646160  349851 cli_runner.go:164] Run: docker container inspect newest-cni-639420 --format={{.State.Status}}
	I1124 09:30:12.665345  349851 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:30:12.665615  349851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:12.735219  349851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-24 09:30:12.723017449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:12.735964  349851 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-639420 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:30:12.738287  349851 out.go:179] * Pausing node newest-cni-639420 ... 
	I1124 09:30:12.739519  349851 host.go:66] Checking if "newest-cni-639420" exists ...
	I1124 09:30:12.739747  349851 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:12.739794  349851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-639420
	I1124 09:30:12.759835  349851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/newest-cni-639420/id_rsa Username:docker}
	I1124 09:30:12.868588  349851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:12.882136  349851 pause.go:52] kubelet running: true
	I1124 09:30:12.882193  349851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:13.066528  349851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:13.066615  349851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:13.138813  349851 cri.go:89] found id: "4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1"
	I1124 09:30:13.138846  349851 cri.go:89] found id: "3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454"
	I1124 09:30:13.138852  349851 cri.go:89] found id: "88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e"
	I1124 09:30:13.138857  349851 cri.go:89] found id: "8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583"
	I1124 09:30:13.138862  349851 cri.go:89] found id: "30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb"
	I1124 09:30:13.138866  349851 cri.go:89] found id: "9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3"
	I1124 09:30:13.138871  349851 cri.go:89] found id: ""
	I1124 09:30:13.138919  349851 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:13.151535  349851 retry.go:31] will retry after 259.130938ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:13Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:13.410994  349851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:13.424354  349851 pause.go:52] kubelet running: false
	I1124 09:30:13.424426  349851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:13.562442  349851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:13.562531  349851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:13.632530  349851 cri.go:89] found id: "4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1"
	I1124 09:30:13.632557  349851 cri.go:89] found id: "3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454"
	I1124 09:30:13.632563  349851 cri.go:89] found id: "88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e"
	I1124 09:30:13.632568  349851 cri.go:89] found id: "8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583"
	I1124 09:30:13.632570  349851 cri.go:89] found id: "30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb"
	I1124 09:30:13.632574  349851 cri.go:89] found id: "9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3"
	I1124 09:30:13.632577  349851 cri.go:89] found id: ""
	I1124 09:30:13.632637  349851 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:13.644551  349851 retry.go:31] will retry after 359.301977ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:13Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:14.004777  349851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:14.017841  349851 pause.go:52] kubelet running: false
	I1124 09:30:14.017912  349851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:14.148409  349851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:14.148490  349851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:14.219282  349851 cri.go:89] found id: "4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1"
	I1124 09:30:14.219302  349851 cri.go:89] found id: "3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454"
	I1124 09:30:14.219306  349851 cri.go:89] found id: "88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e"
	I1124 09:30:14.219315  349851 cri.go:89] found id: "8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583"
	I1124 09:30:14.219318  349851 cri.go:89] found id: "30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb"
	I1124 09:30:14.219345  349851 cri.go:89] found id: "9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3"
	I1124 09:30:14.219352  349851 cri.go:89] found id: ""
	I1124 09:30:14.219395  349851 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:14.231906  349851 retry.go:31] will retry after 599.957394ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:14.832721  349851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:14.847296  349851 pause.go:52] kubelet running: false
	I1124 09:30:14.847400  349851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:14.993528  349851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:14.993607  349851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:15.075215  349851 cri.go:89] found id: "4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1"
	I1124 09:30:15.075238  349851 cri.go:89] found id: "3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454"
	I1124 09:30:15.075245  349851 cri.go:89] found id: "88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e"
	I1124 09:30:15.075251  349851 cri.go:89] found id: "8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583"
	I1124 09:30:15.075255  349851 cri.go:89] found id: "30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb"
	I1124 09:30:15.075260  349851 cri.go:89] found id: "9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3"
	I1124 09:30:15.075265  349851 cri.go:89] found id: ""
	I1124 09:30:15.075303  349851 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:15.101065  349851 out.go:203] 
	W1124 09:30:15.102532  349851 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:30:15.102550  349851 out.go:285] * 
	W1124 09:30:15.107576  349851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:30:15.108967  349851 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-639420 --alsologtostderr -v=1 failed: exit status 80
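The failure above reduces to one repeated step: pause shells into the node and runs sudo runc list -f json, which exits 1 because /run/runc is missing on this crio node; after a few jittered, growing retries (retry.go:31: 259ms, 359ms, 599ms) it aborts with GUEST_PAUSE. A rough Go sketch of that retry-around-a-command shape — not minikube's actual implementation; the attempt count and base delay here are illustrative:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runWithRetry re-runs a command with growing, jittered waits, in the
	// spirit of the retry.go lines above, before giving up.
	func runWithRetry(attempts int, base time.Duration, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if _, err = exec.Command(name, args...).Output(); err == nil {
				return nil
			}
			wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		if err := runWithRetry(3, 200*time.Millisecond, "sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Println("Exiting due to GUEST_PAUSE:", err) // same error class as the stderr above
		}
	}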
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-639420
helpers_test.go:243: (dbg) docker inspect newest-cni-639420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	        "Created": "2025-11-24T09:29:23.35779578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:30:00.824883705Z",
	            "FinishedAt": "2025-11-24T09:29:59.921421167Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hostname",
	        "HostsPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hosts",
	        "LogPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99-json.log",
	        "Name": "/newest-cni-639420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-639420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-639420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	                "LowerDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-639420",
	                "Source": "/var/lib/docker/volumes/newest-cni-639420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-639420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-639420",
	                "name.minikube.sigs.k8s.io": "newest-cni-639420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d077bf915ebc54435b3005856e7d9665aebd5a7efe37f077063f6a8633167193",
	            "SandboxKey": "/var/run/docker/netns/d077bf915ebc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-639420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb5ecbfd413335d9913854ce166d0ab6940e67ee6eb0c6e4edd097241e0aa654",
	                    "EndpointID": "fe09ea7bd4f18a64e1b8c2165c423530980a1231ae0c4c44fadb9ba6f0606e81",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:46:ff:76:63:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-639420",
	                        "71986ab5f5c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
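Cross-checking the stderr against this inspect output: the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} that pause ran (the cli_runner line above) resolves on this JSON to 33123, the host port minikube then dialed for SSH. A standalone equivalent using os/exec, with the container name taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner line in the stderr used to find the
		// host port bound to the container's 22/tcp (33123 in this report).
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"newest-cni-639420").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}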
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420
I1124 09:30:15.190939    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420: exit status 2 (481.549659ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-639420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-639420 logs -n 25: (1.289962267s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
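The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Newer kubeadm releases ship a validate subcommand that can sanity-check such a file against its declared API versions; a hypothetical manual check, assuming a kubeadm binary of the matching version is available on the node (minikube does not run this itself):

    # hypothetical check, not performed by minikube
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new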
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
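The /etc/hosts rewrite above leans on two shell idioms: $'\t...' is Bash ANSI-C quoting for a literal tab in the grep pattern, and /tmp/h.$$ embeds the shell's PID in the temp-file name so concurrent runs do not clobber each other; the filtered file is copied back with sudo cp because plain output redirection to /etc/hosts would be performed by the unprivileged shell. The same pattern in isolation, with a placeholder hostname and IP:

    # illustrative only: replace any existing entry for example.internal, then install the new one
    { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.1	example.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts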
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
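Each openssl call above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a failing check would trigger certificate regeneration rather than letting a cert lapse mid-run. Standalone form, with a placeholder path:

    # placeholder path; exits 0 iff the cert remains valid for the next 86400s
    openssl x509 -noout -checkend 86400 -in /path/to/cert.crt && echo "valid for at least 24h"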
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
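The runc error above is tolerated: minikube probes for paused containers with runc list, and runc's default state directory when run as root is /run/runc, which simply does not exist on this node, so the unpause probe is skipped and the restart logic proceeds. A manual equivalent of the same check, using only the directory probe plus a command already seen in this log:

    # no runc-managed state under the default root means nothing is paused there
    ls -d /run/runc 2>/dev/null || echo "no runc state dir"
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system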
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.160631282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.164314057Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=699acb5f-dff2-43e9-a997-b13efac62e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.165088522Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d3f40340-50dc-4e6d-ab77-90ac165d2111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.166198583Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.166852746Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.16710998Z" level=info msg="Ran pod sandbox 1425a8902f53bc4b08507ca73829d16a4496db69ab46d0e3183460679d2061ae with infra container: kube-system/kindnet-ttw2l/POD" id=699acb5f-dff2-43e9-a997-b13efac62e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.167664003Z" level=info msg="Ran pod sandbox 44a65ae6ed7dc3c271b8dd326740d870425bceeb487ffcad6c77eb034f5b6bae with infra container: kube-system/kube-proxy-p6g59/POD" id=d3f40340-50dc-4e6d-ab77-90ac165d2111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.168466881Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c478eb5-b141-4a8f-800c-283f3d847102 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.16912563Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=46213bb4-c847-476b-aa25-6820fda501e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.169566275Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6c26f13b-648a-44d5-9078-55e9303616d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170205282Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4ad22590-293a-4b67-b72b-9cdf6b4c7aaf name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170718639Z" level=info msg="Creating container: kube-system/kindnet-ttw2l/kindnet-cni" id=d97cbcd6-b850-4daf-a73a-ab2b9cb316e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170914435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.172277943Z" level=info msg="Creating container: kube-system/kube-proxy-p6g59/kube-proxy" id=aa5bec21-57df-4811-947c-b69c8b159322 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.172828248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.177853993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.178751982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.182838216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.183504495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.213981428Z" level=info msg="Created container 3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454: kube-system/kindnet-ttw2l/kindnet-cni" id=d97cbcd6-b850-4daf-a73a-ab2b9cb316e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.214734526Z" level=info msg="Starting container: 3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454" id=465f4b91-d0a1-46c5-87b7-3681e32b8c13 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.216527761Z" level=info msg="Started container" PID=1029 containerID=3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454 description=kube-system/kindnet-ttw2l/kindnet-cni id=465f4b91-d0a1-46c5-87b7-3681e32b8c13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1425a8902f53bc4b08507ca73829d16a4496db69ab46d0e3183460679d2061ae
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.217516414Z" level=info msg="Created container 4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1: kube-system/kube-proxy-p6g59/kube-proxy" id=aa5bec21-57df-4811-947c-b69c8b159322 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.218230606Z" level=info msg="Starting container: 4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1" id=ca2422a3-6ccf-41f0-920b-14d81412195f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.221391995Z" level=info msg="Started container" PID=1030 containerID=4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1 description=kube-system/kube-proxy-p6g59/kube-proxy id=ca2422a3-6ccf-41f0-920b-14d81412195f name=/runtime.v1.RuntimeService/StartContainer sandboxID=44a65ae6ed7dc3c271b8dd326740d870425bceeb487ffcad6c77eb034f5b6bae
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4444f7831eb2b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   44a65ae6ed7dc       kube-proxy-p6g59                            kube-system
	3e874950816f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   1425a8902f53b       kindnet-ttw2l                               kube-system
	88c7ce3d6164f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   eac68c86b88f6       etcd-newest-cni-639420                      kube-system
	8e4139d1654f0       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   77cd9978ba782       kube-scheduler-newest-cni-639420            kube-system
	30a6e3c64639b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   239335ad4196b       kube-controller-manager-newest-cni-639420   kube-system
	9184b94b25625       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   d3a6f21919e94       kube-apiserver-newest-cni-639420            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-639420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-639420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=newest-cni-639420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-639420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-639420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c98ff8f9-f47f-426e-a902-762092513ece
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-639420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-ttw2l                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-639420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-639420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-p6g59                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-639420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  23s   node-controller  Node newest-cni-639420 event: Registered Node newest-cni-639420 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-639420 event: Registered Node newest-cni-639420 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e] <==
	{"level":"warn","ts":"2025-11-24T09:30:09.127377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.143963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.153701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.161912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.170084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.178584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.187031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.194114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.201573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.216512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.223671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.230409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.239039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.246510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.253605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.261308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.268420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.275102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.282923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.301569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.305185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.312692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.319955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.326718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.390375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:30:16 up  1:12,  0 user,  load average: 5.33, 3.67, 2.33
	Linux newest-cni-639420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454] <==
	I1124 09:30:10.469917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:30:10.472570       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 09:30:10.472883       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:30:10.472972       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:30:10.473023       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:30:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:30:10.673037       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:30:10.673597       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:30:10.673635       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:30:10.674001       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:30:11.073865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:30:11.073903       1 metrics.go:72] Registering metrics
	I1124 09:30:11.073961       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3] <==
	I1124 09:30:09.873507       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:30:09.873562       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:30:09.873876       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 09:30:09.874215       1 aggregator.go:187] initial CRD sync complete...
	I1124 09:30:09.874261       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:30:09.874285       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:30:09.874308       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:30:09.874540       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:09.874540       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:30:09.875375       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:30:09.885277       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 09:30:09.887421       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:30:09.893282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:30:09.901755       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:30:09.902025       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:10.178316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:30:10.218635       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:30:10.243692       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:30:10.253927       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:30:10.309902       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.187.73"}
	I1124 09:30:10.322226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.196.36"}
	I1124 09:30:10.776325       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:30:13.406275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:30:13.509243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:30:13.557897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb] <==
	I1124 09:30:13.016130       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.016211       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:30:13.012917       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012972       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.016501       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:30:13.016515       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:13.016534       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012898       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.015953       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012942       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012951       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.015365       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012985       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012977       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012960       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012966       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.018953       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 09:30:13.019070       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-639420"
	I1124 09:30:13.019158       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:30:13.025891       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:13.049189       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.113803       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.113827       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:30:13.113833       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:30:13.126736       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1] <==
	I1124 09:30:10.272440       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:30:10.336727       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:10.437460       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:10.437489       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 09:30:10.437562       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:30:10.460731       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:30:10.460808       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:30:10.466370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:30:10.466820       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:30:10.466857       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:10.468168       1 config.go:200] "Starting service config controller"
	I1124 09:30:10.468198       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:30:10.468271       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:30:10.468298       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:30:10.468371       1 config.go:309] "Starting node config controller"
	I1124 09:30:10.468379       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:30:10.468386       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:30:10.468413       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:30:10.468418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:30:10.569214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:30:10.569206       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:30:10.569247       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583] <==
	I1124 09:30:08.836876       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:30:09.813399       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:30:09.813435       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:30:09.813578       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:30:09.813605       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:30:09.837419       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1124 09:30:09.837491       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:09.840497       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:09.840535       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:09.840699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:30:09.841162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:30:09.941609       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893141     660 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893236     660 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893268     660 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.894407     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.900210     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.900795     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.901178     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.901534     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-639420" containerName="kube-controller-manager"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.926567     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-639420\" already exists" pod="kube-system/kube-apiserver-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.926680     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.928698     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-639420\" already exists" pod="kube-system/etcd-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.928798     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-639420" containerName="etcd"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.932653     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-639420\" already exists" pod="kube-system/kube-scheduler-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.932738     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.957034     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.989916     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-lib-modules\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.990575     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-xtables-lock\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.907426     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-639420" containerName="etcd"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908056     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908376     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-639420" containerName="kube-controller-manager"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908686     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:30:12 newest-cni-639420 kubelet[660]: E1124 09:30:12.152063     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639420 -n newest-cni-639420
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639420 -n newest-cni-639420: exit status 2 (355.433965ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
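
For local triage, the status probe above can be replayed outside the test binary. A minimal Go sketch (an illustrative standalone helper, not part of helpers_test.go, assuming the out/minikube-linux-amd64 binary and the newest-cni-639420 profile from this run) that shells out the same way and surfaces the exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// Replays the post-mortem probe:
	//   minikube status --format={{.APIServer}} -p newest-cni-639420
	// "minikube status" encodes component state in its exit code; per the
	// harness note above, exit status 2 "may be ok".
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}", "-p", "newest-cni-639420")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
		}
	}
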
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-639420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np: exit status 1 (72.567033ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nt7fv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-dsc28" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-786np" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np: exit status 1
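
The field selector above (status.phase!=Running) can also be issued programmatically; the describe step then returned NotFound, most likely because the listed pods were replaced between the list and the describe. A minimal client-go sketch (an illustrative helper, assuming a kubeconfig at the default path whose current context points at this cluster; the harness instead passes --context newest-cni-639420 to kubectl):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config with the desired context already selected.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// Same selector as the kubectl invocation above: every pod not in
		// phase Running, across all namespaces.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(
			context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"},
		)
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
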
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-639420
helpers_test.go:243: (dbg) docker inspect newest-cni-639420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	        "Created": "2025-11-24T09:29:23.35779578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:30:00.824883705Z",
	            "FinishedAt": "2025-11-24T09:29:59.921421167Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hostname",
	        "HostsPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/hosts",
	        "LogPath": "/var/lib/docker/containers/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99/71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99-json.log",
	        "Name": "/newest-cni-639420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-639420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-639420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71986ab5f5c3b29d15e064be7202d936abd1699ed5e5399d3b0af66ec9725f99",
	                "LowerDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9eea581c6504c4a5447d4716a1292df66048ba70d97bccea9a6cd9a6f2f49224/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-639420",
	                "Source": "/var/lib/docker/volumes/newest-cni-639420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-639420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-639420",
	                "name.minikube.sigs.k8s.io": "newest-cni-639420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d077bf915ebc54435b3005856e7d9665aebd5a7efe37f077063f6a8633167193",
	            "SandboxKey": "/var/run/docker/netns/d077bf915ebc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-639420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb5ecbfd413335d9913854ce166d0ab6940e67ee6eb0c6e4edd097241e0aa654",
	                    "EndpointID": "fe09ea7bd4f18a64e1b8c2165c423530980a1231ae0c4c44fadb9ba6f0606e81",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:46:ff:76:63:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-639420",
	                        "71986ab5f5c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
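
The inspect document above confirms the container is still running and unpaused after the failed pause. When only those fields matter, the harness's --format={{.State.Status}} probe can be mirrored by decoding the JSON directly; a minimal sketch (an illustrative helper, assuming the docker CLI from this run is on PATH):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// containerInspect models only the fields read below; `docker inspect`
	// emits a JSON array with one object per inspected container.
	type containerInspect struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-639420").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []containerInspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			// A successful pause would show Status "paused" / Paused=true.
			fmt.Printf("status=%s running=%v paused=%v\n",
				c.State.Status, c.State.Running, c.State.Paused)
		}
	}
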
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420: exit status 2 (402.73573ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-639420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-639420 logs -n 25: (2.129230254s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
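	Both driver probes above shell out to `docker system info --format "{{json .}}"` and parse the full JSON blob. A few of the fields visible in the dump can be pulled the same way by hand (the jq filter is illustrative, not minikube's code):
	
	  docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, ServerVersion}'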
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
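	With the docker driver and the crio runtime, minikube recommends kindnet and sets NetworkPlugin=cni, as logged above. The same selection can be pinned explicitly on the command line (sketch; profile name taken from this run):
	
	  minikube start -p embed-certs-673346 --driver=docker --container-runtime=crio --cni=kindnet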
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
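	The network inspect above assembles a JSON document entirely from a Go template passed to the docker CLI. A narrower probe for just the subnet and gateway uses the same mechanism (illustrative):
	
	  docker network inspect default-k8s-diff-port-164377 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'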
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
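	Each "Not caching binary" line above means the v1.34.2 kubeadm binary is fetched straight from dl.k8s.io, with the published .sha256 file as the checksum source. The equivalent manual download-and-verify (the curl/sha256sum pairing is illustrative, not minikube's code):
	
	  curl -LO https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm
	  curl -LO https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	  echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check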
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
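	In the kubelet drop-in above (written to 10-kubeadm.conf below), the empty ExecStart= followed by a second ExecStart= is the standard systemd idiom for replacing, rather than appending to, the base unit's command line. The merged result can be inspected on the node (illustrative):
	
	  systemctl cat kubelet
	  systemd-delta --type=extended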
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
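	
	A config rendered like the one above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be sanity-checked without mutating node state via kubeadm's dry-run mode (illustrative; minikube invokes kubeadm itself during start):
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run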
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
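	The "scp memory -->" lines mean minikube streams an in-memory buffer over the existing SSH session rather than copying a file from disk; the byte count in parentheses is the payload size. Roughly equivalent by hand (host and local path are illustrative):
	
	  cat kubeadm.yaml | ssh docker@node 'sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null'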
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
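	The three hash-named symlinks created above follow OpenSSL's c_rehash convention: every trusted certificate is reachable as /etc/ssl/certs/<subject_hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA, per the commands above). Recreating one by hand (illustrative):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"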
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
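	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 h), evidently gating whether the existing control-plane certs can be reused. Standalone form (illustrative):
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"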
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
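	runc keeps per-container state under its state root (/run/runc by default), so the error above only means nothing is registered there; the unpause probe fails and minikube proceeds to the config-file check. The same query with the root spelled out (illustrative):
	
	  sudo runc --root /run/runc list -f json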
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.160631282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.164314057Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=699acb5f-dff2-43e9-a997-b13efac62e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.165088522Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d3f40340-50dc-4e6d-ab77-90ac165d2111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.166198583Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.166852746Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.16710998Z" level=info msg="Ran pod sandbox 1425a8902f53bc4b08507ca73829d16a4496db69ab46d0e3183460679d2061ae with infra container: kube-system/kindnet-ttw2l/POD" id=699acb5f-dff2-43e9-a997-b13efac62e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.167664003Z" level=info msg="Ran pod sandbox 44a65ae6ed7dc3c271b8dd326740d870425bceeb487ffcad6c77eb034f5b6bae with infra container: kube-system/kube-proxy-p6g59/POD" id=d3f40340-50dc-4e6d-ab77-90ac165d2111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.168466881Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c478eb5-b141-4a8f-800c-283f3d847102 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.16912563Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=46213bb4-c847-476b-aa25-6820fda501e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.169566275Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6c26f13b-648a-44d5-9078-55e9303616d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170205282Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4ad22590-293a-4b67-b72b-9cdf6b4c7aaf name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170718639Z" level=info msg="Creating container: kube-system/kindnet-ttw2l/kindnet-cni" id=d97cbcd6-b850-4daf-a73a-ab2b9cb316e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.170914435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.172277943Z" level=info msg="Creating container: kube-system/kube-proxy-p6g59/kube-proxy" id=aa5bec21-57df-4811-947c-b69c8b159322 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.172828248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.177853993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.178751982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.182838216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.183504495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.213981428Z" level=info msg="Created container 3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454: kube-system/kindnet-ttw2l/kindnet-cni" id=d97cbcd6-b850-4daf-a73a-ab2b9cb316e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.214734526Z" level=info msg="Starting container: 3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454" id=465f4b91-d0a1-46c5-87b7-3681e32b8c13 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.216527761Z" level=info msg="Started container" PID=1029 containerID=3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454 description=kube-system/kindnet-ttw2l/kindnet-cni id=465f4b91-d0a1-46c5-87b7-3681e32b8c13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1425a8902f53bc4b08507ca73829d16a4496db69ab46d0e3183460679d2061ae
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.217516414Z" level=info msg="Created container 4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1: kube-system/kube-proxy-p6g59/kube-proxy" id=aa5bec21-57df-4811-947c-b69c8b159322 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.218230606Z" level=info msg="Starting container: 4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1" id=ca2422a3-6ccf-41f0-920b-14d81412195f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:10 newest-cni-639420 crio[521]: time="2025-11-24T09:30:10.221391995Z" level=info msg="Started container" PID=1030 containerID=4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1 description=kube-system/kube-proxy-p6g59/kube-proxy id=ca2422a3-6ccf-41f0-920b-14d81412195f name=/runtime.v1.RuntimeService/StartContainer sandboxID=44a65ae6ed7dc3c271b8dd326740d870425bceeb487ffcad6c77eb034f5b6bae
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4444f7831eb2b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   9 seconds ago       Running             kube-proxy                1                   44a65ae6ed7dc       kube-proxy-p6g59                            kube-system
	3e874950816f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 seconds ago       Running             kindnet-cni               1                   1425a8902f53b       kindnet-ttw2l                               kube-system
	88c7ce3d6164f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   10 seconds ago      Running             etcd                      1                   eac68c86b88f6       etcd-newest-cni-639420                      kube-system
	8e4139d1654f0       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   10 seconds ago      Running             kube-scheduler            1                   77cd9978ba782       kube-scheduler-newest-cni-639420            kube-system
	30a6e3c64639b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   10 seconds ago      Running             kube-controller-manager   1                   239335ad4196b       kube-controller-manager-newest-cni-639420   kube-system
	9184b94b25625       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   10 seconds ago      Running             kube-apiserver            1                   d3a6f21919e94       kube-apiserver-newest-cni-639420            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-639420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-639420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=newest-cni-639420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-639420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 09:30:09 +0000   Mon, 24 Nov 2025 09:29:45 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-639420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c98ff8f9-f47f-426e-a902-762092513ece
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-639420                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-ttw2l                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-639420             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-639420    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-p6g59                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-639420             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-639420 event: Registered Node newest-cni-639420 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-639420 event: Registered Node newest-cni-639420 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [88c7ce3d6164f5dfd7c2cc0943164705cb7b0ebb9f192f92ef4d886c82cf2a0e] <==
	{"level":"warn","ts":"2025-11-24T09:30:09.127377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.143963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.153701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.161912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.170084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.178584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.187031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.194114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.201573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.216512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.223671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.230409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.239039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.246510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.253605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.261308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.268420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.275102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.282923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.301569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.305185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.312692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.319955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.326718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:09.390375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:30:19 up  1:12,  0 user,  load average: 5.33, 3.67, 2.33
	Linux newest-cni-639420 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e874950816f8f9380996f25892703e1e6ee6a8b0df234a459afa5ef4635d454] <==
	I1124 09:30:10.469917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:30:10.472570       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 09:30:10.472883       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:30:10.472972       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:30:10.473023       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:30:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:30:10.673037       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:30:10.673597       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:30:10.673635       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:30:10.674001       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:30:11.073865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:30:11.073903       1 metrics.go:72] Registering metrics
	I1124 09:30:11.073961       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9184b94b256251b919191181e6796e324ba18bae4cbf3a0f2119b9a42fec5ca3] <==
	I1124 09:30:09.873507       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:30:09.873562       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:30:09.873876       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 09:30:09.874215       1 aggregator.go:187] initial CRD sync complete...
	I1124 09:30:09.874261       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:30:09.874285       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:30:09.874308       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:30:09.874540       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:09.874540       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:30:09.875375       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:30:09.885277       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 09:30:09.887421       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:30:09.893282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:30:09.901755       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:30:09.902025       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:10.178316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:30:10.218635       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:30:10.243692       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:30:10.253927       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:30:10.309902       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.187.73"}
	I1124 09:30:10.322226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.196.36"}
	I1124 09:30:10.776325       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:30:13.406275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:30:13.509243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:30:13.557897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [30a6e3c64639b370b90b744c74ab8c105c5d29c1da8e1514736b486aa759bfeb] <==
	I1124 09:30:13.016130       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.016211       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:30:13.012917       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012972       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.016501       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:30:13.016515       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:13.016534       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012898       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.015953       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012942       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012951       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.015365       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012985       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012977       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012960       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.012966       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.018953       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 09:30:13.019070       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-639420"
	I1124 09:30:13.019158       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:30:13.025891       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:13.049189       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.113803       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:13.113827       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:30:13.113833       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:30:13.126736       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4444f7831eb2b1d03eb83b0ca6be8809af107846fd18d2e352b5055228363ab1] <==
	I1124 09:30:10.272440       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:30:10.336727       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:10.437460       1 shared_informer.go:377] "Caches are synced"
	I1124 09:30:10.437489       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 09:30:10.437562       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:30:10.460731       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:30:10.460808       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:30:10.466370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:30:10.466820       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:30:10.466857       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:10.468168       1 config.go:200] "Starting service config controller"
	I1124 09:30:10.468198       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:30:10.468271       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:30:10.468298       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:30:10.468371       1 config.go:309] "Starting node config controller"
	I1124 09:30:10.468379       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:30:10.468386       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:30:10.468413       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:30:10.468418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:30:10.569214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:30:10.569206       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:30:10.569247       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8e4139d1654f00b2c6ab22c36ff38c33600c3a84dfef0d03739ea6736a42c583] <==
	I1124 09:30:08.836876       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:30:09.813399       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:30:09.813435       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:30:09.813578       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:30:09.813605       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:30:09.837419       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1124 09:30:09.837491       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:09.840497       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:09.840535       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:30:09.840699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:30:09.841162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:30:09.941609       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893141     660 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893236     660 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.893268     660 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.894407     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.900210     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.900795     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.901178     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.901534     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-639420" containerName="kube-controller-manager"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.926567     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-639420\" already exists" pod="kube-system/kube-apiserver-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.926680     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.928698     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-639420\" already exists" pod="kube-system/etcd-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.928798     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-639420" containerName="etcd"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.932653     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-639420\" already exists" pod="kube-system/kube-scheduler-newest-cni-639420"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: E1124 09:30:09.932738     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.957034     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.989916     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-lib-modules\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:30:09 newest-cni-639420 kubelet[660]: I1124 09:30:09.990575     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/732ff47b-0bb4-48c6-bd56-743340884576-xtables-lock\") pod \"kube-proxy-p6g59\" (UID: \"732ff47b-0bb4-48c6-bd56-743340884576\") " pod="kube-system/kube-proxy-p6g59"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.907426     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-639420" containerName="etcd"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908056     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908376     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-639420" containerName="kube-controller-manager"
	Nov 24 09:30:10 newest-cni-639420 kubelet[660]: E1124 09:30:10.908686     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-639420" containerName="kube-apiserver"
	Nov 24 09:30:12 newest-cni-639420 kubelet[660]: E1124 09:30:12.152063     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-639420" containerName="kube-scheduler"
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:13 newest-cni-639420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639420 -n newest-cni-639420
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639420 -n newest-cni-639420: exit status 2 (438.335846ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-639420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np: exit status 1 (70.594507ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nt7fv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-dsc28" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-786np" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-639420 describe pod coredns-7d764666f9-nt7fv storage-provisioner dashboard-metrics-scraper-867fb5f87b-dsc28 kubernetes-dashboard-b84665fb8-786np: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.01s)
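Triage note: the Pause failures in this run share one shape. `minikube pause` disables kubelet and then tries to enumerate running containers with `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" (the pattern is explicit in the no-preload trace below), so the command aborts with GUEST_PAUSE once its retries are exhausted. The describe-node output above also shows the node NotReady because no CNI config was found in /etc/cni/net.d/. A minimal inspection sketch, assuming the newest-cni-639420 profile is still up; /run/runc is the path runc complained about, not a confirmed CRI-O state directory:

	minikube ssh -p newest-cni-639420 -- sudo ls -la /run/runc     # the state root that `runc list` reads
	minikube ssh -p newest-cni-639420 -- sudo crictl ps -a         # what CRI-O itself reports running
	minikube ssh -p newest-cni-639420 -- ls /etc/cni/net.d/        # CNI config behind the NotReady condition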

x
+
TestStartStop/group/no-preload/serial/Pause (7.58s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-938348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-938348 --alsologtostderr -v=1: exit status 80 (2.699145099s)

-- stdout --
	* Pausing node no-preload-938348 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 09:30:15.484850  351469 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:15.485680  351469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:15.485689  351469 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:15.485696  351469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:15.486049  351469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:15.486404  351469 out.go:368] Setting JSON to false
	I1124 09:30:15.486419  351469 mustload.go:66] Loading cluster: no-preload-938348
	I1124 09:30:15.487020  351469 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:15.487607  351469 cli_runner.go:164] Run: docker container inspect no-preload-938348 --format={{.State.Status}}
	I1124 09:30:15.519238  351469 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:30:15.519808  351469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:15.611878  351469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 09:30:15.598748987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:15.613063  351469 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-938348 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:30:15.616045  351469 out.go:179] * Pausing node no-preload-938348 ... 
	I1124 09:30:15.617432  351469 host.go:66] Checking if "no-preload-938348" exists ...
	I1124 09:30:15.617770  351469 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:15.617812  351469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-938348
	I1124 09:30:15.642040  351469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/no-preload-938348/id_rsa Username:docker}
	I1124 09:30:15.769041  351469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:15.790646  351469 pause.go:52] kubelet running: true
	I1124 09:30:15.790840  351469 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:16.089855  351469 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:16.090131  351469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:16.186901  351469 cri.go:89] found id: "9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110"
	I1124 09:30:16.186927  351469 cri.go:89] found id: "81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf"
	I1124 09:30:16.186931  351469 cri.go:89] found id: "0d54153cfa2374cf3ebf9c6ea76683457f7cb271d29888cbab4b9e2c932c6fc1"
	I1124 09:30:16.186934  351469 cri.go:89] found id: "db9a9aba0693b189be45a91992e8cb1a931ecc959206bcf32d3ea78e9ee78cab"
	I1124 09:30:16.186937  351469 cri.go:89] found id: "7d63bdcbadf22ee1f12d149e06cf86eec9b2d1bd764b9d32e156aaa6df690dfe"
	I1124 09:30:16.186941  351469 cri.go:89] found id: "3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9"
	I1124 09:30:16.186943  351469 cri.go:89] found id: "36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652"
	I1124 09:30:16.186946  351469 cri.go:89] found id: "a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548"
	I1124 09:30:16.186948  351469 cri.go:89] found id: "bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751"
	I1124 09:30:16.186954  351469 cri.go:89] found id: "dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	I1124 09:30:16.186957  351469 cri.go:89] found id: "e11a1702d22ad7eb2911cd8bf1695d4428ffe0b597b74c43c44c0dd577f07792"
	I1124 09:30:16.186959  351469 cri.go:89] found id: ""
	I1124 09:30:16.186995  351469 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:16.203390  351469 retry.go:31] will retry after 374.323557ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:16.580807  351469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:16.609414  351469 pause.go:52] kubelet running: false
	I1124 09:30:16.609498  351469 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:16.831768  351469 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:16.831862  351469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:16.924607  351469 cri.go:89] found id: "9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110"
	I1124 09:30:16.924637  351469 cri.go:89] found id: "81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf"
	I1124 09:30:16.924645  351469 cri.go:89] found id: "0d54153cfa2374cf3ebf9c6ea76683457f7cb271d29888cbab4b9e2c932c6fc1"
	I1124 09:30:16.924650  351469 cri.go:89] found id: "db9a9aba0693b189be45a91992e8cb1a931ecc959206bcf32d3ea78e9ee78cab"
	I1124 09:30:16.924656  351469 cri.go:89] found id: "7d63bdcbadf22ee1f12d149e06cf86eec9b2d1bd764b9d32e156aaa6df690dfe"
	I1124 09:30:16.924661  351469 cri.go:89] found id: "3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9"
	I1124 09:30:16.924666  351469 cri.go:89] found id: "36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652"
	I1124 09:30:16.924671  351469 cri.go:89] found id: "a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548"
	I1124 09:30:16.924676  351469 cri.go:89] found id: "bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751"
	I1124 09:30:16.924693  351469 cri.go:89] found id: "dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	I1124 09:30:16.924697  351469 cri.go:89] found id: "e11a1702d22ad7eb2911cd8bf1695d4428ffe0b597b74c43c44c0dd577f07792"
	I1124 09:30:16.924700  351469 cri.go:89] found id: ""
	I1124 09:30:16.924745  351469 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:16.945255  351469 retry.go:31] will retry after 500.595005ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:17.446973  351469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:17.463171  351469 pause.go:52] kubelet running: false
	I1124 09:30:17.463237  351469 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:30:17.649530  351469 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:30:17.649617  351469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:30:17.747062  351469 cri.go:89] found id: "9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110"
	I1124 09:30:17.747087  351469 cri.go:89] found id: "81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf"
	I1124 09:30:17.747094  351469 cri.go:89] found id: "0d54153cfa2374cf3ebf9c6ea76683457f7cb271d29888cbab4b9e2c932c6fc1"
	I1124 09:30:17.747098  351469 cri.go:89] found id: "db9a9aba0693b189be45a91992e8cb1a931ecc959206bcf32d3ea78e9ee78cab"
	I1124 09:30:17.747103  351469 cri.go:89] found id: "7d63bdcbadf22ee1f12d149e06cf86eec9b2d1bd764b9d32e156aaa6df690dfe"
	I1124 09:30:17.747108  351469 cri.go:89] found id: "3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9"
	I1124 09:30:17.747112  351469 cri.go:89] found id: "36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652"
	I1124 09:30:17.747116  351469 cri.go:89] found id: "a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548"
	I1124 09:30:17.747130  351469 cri.go:89] found id: "bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751"
	I1124 09:30:17.747139  351469 cri.go:89] found id: "dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	I1124 09:30:17.747153  351469 cri.go:89] found id: "e11a1702d22ad7eb2911cd8bf1695d4428ffe0b597b74c43c44c0dd577f07792"
	I1124 09:30:17.747157  351469 cri.go:89] found id: ""
	I1124 09:30:17.747199  351469 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:30:17.942777  351469 out.go:203] 
	W1124 09:30:18.002015  351469 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:30:18.002046  351469 out.go:285] * 
	* 
	W1124 09:30:18.007550  351469 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:30:18.044192  351469 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-938348 --alsologtostderr -v=1 failed: exit status 80
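
Triage note: the stderr above shows the exact failure point. After disabling kubelet, pause shells out to `sudo runc list -f json` three times (retry.go backs off twice), and every attempt fails with "open /run/runc: no such file or directory", so minikube exits with GUEST_PAUSE / status 80 before any container is paused. A hedged repro sketch, assuming SSH access to the node and treating /run/runc and the CRI-O config location as assumptions to verify rather than known facts:

	minikube ssh -p no-preload-938348 -- sudo runc list -f json           # reproduce the failing call verbatim
	minikube ssh -p no-preload-938348 -- sudo runc --root /run/runc list  # the same default root, made explicit
	minikube ssh -p no-preload-938348 -- grep -R runtime_root /etc/crio/  # where CRI-O is configured to keep runtime state

If CRI-O keeps its runc state under a different root, `runc list` with the default root would fail exactly like this even while the CRI still reports containers running, which would match the cri.go "found id" lines above.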
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-938348
helpers_test.go:243: (dbg) docker inspect no-preload-938348:

-- stdout --
	[
	    {
	        "Id": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	        "Created": "2025-11-24T09:28:01.464607298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:29:17.27545814Z",
	            "FinishedAt": "2025-11-24T09:29:15.974768209Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hosts",
	        "LogPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761-json.log",
	        "Name": "/no-preload-938348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-938348:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-938348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	                "LowerDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-938348",
	                "Source": "/var/lib/docker/volumes/no-preload-938348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-938348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-938348",
	                "name.minikube.sigs.k8s.io": "no-preload-938348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4c94100bc69470817b1f2be0d46cb001b149f198cb2bd347aa463023348db86c",
	            "SandboxKey": "/var/run/docker/netns/4c94100bc694",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-938348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f03f3b5e2bfb0cd68097788ad47d94eb14c12cf815ca0f14753094201a5fac2",
	                    "EndpointID": "44487318792c4f8d1f6132748d3c811aa814aeca9ddd2c393ee790bb5cb45f14",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:51:36:52:ba:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-938348",
	                        "c1c5f9bb92d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
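Note on the inspect output above: the container itself is still "running" with "Paused": false, suggesting the failed `pause` happened above the Docker layer, in the guest's container runtime, not at the container level. A minimal shell sketch for spot-checking the same fields by hand (container name taken from this run):

	# Pause at the Docker layer would flip .State.Paused; here it stays false
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-938348
	# Host port published on 127.0.0.1 for the API server (33116 -> 8443 in this run)
	docker port no-preload-938348 8443/tcp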
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348: exit status 2 (359.567719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
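`minikube status` encodes component health in its exit code (per its help text, bits from right to left: 1 = host not OK, 2 = cluster not OK, 4 = kubernetes not OK), which is why the harness treats exit status 2 as possibly OK here: the host is Running, but a cluster component is not. The same probe, runnable by hand (a sketch using this run's profile name):

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
	echo $?   # 2 here: host up, cluster components not all running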
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-938348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-938348 logs -n 25: (2.093495119s)
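To replay the failing step outside the harness, the commands from this run can be invoked directly (a sketch; the profile name and 25-line log length are taken from the transcript):

	out/minikube-linux-amd64 pause -p no-preload-938348 --alsologtostderr -v=1; echo "exit=$?"   # exited 80 above
	out/minikube-linux-amd64 -p no-preload-938348 logs -n 25   # produces the trimmed log shown below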
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
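
	The dashboard addon is applied as one kubectl invocation with repeated -f flags. An equivalent sketch, assuming every manifest staged under /etc/kubernetes/addons should be applied (this would also pick up the storageclass and storage-provisioner files installed above):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/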
	
	
	==> CRI-O <==
	Nov 24 09:29:39 no-preload-938348 crio[558]: time="2025-11-24T09:29:39.033204838Z" level=info msg="Started container" PID=1715 containerID=95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper id=f9b4a1ec-2e15-4838-9ee2-4eb1d75b43d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed52dcb6e98027692222c21662bb8b91f3553eb34467636c28eabf3bbf859c1f
	Nov 24 09:29:39 no-preload-938348 crio[558]: time="2025-11-24T09:29:39.993682163Z" level=info msg="Removing container: e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5" id=adc8e267-f0af-4a9b-b29c-cedf3027fa9b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:40 no-preload-938348 crio[558]: time="2025-11-24T09:29:40.004412562Z" level=info msg="Removed container e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=adc8e267-f0af-4a9b-b29c-cedf3027fa9b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.912421649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=606b44e5-6297-4c4d-88f5-ca1b39867c67 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.914856235Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e81eff4b-26aa-4652-a30f-ce63ebd8b8db name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.917678619Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=d899827b-cc3c-4878-813f-22d219aa8613 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.917810305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.925074462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.925537592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.957813423Z" level=info msg="Created container dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=d899827b-cc3c-4878-813f-22d219aa8613 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.958485796Z" level=info msg="Starting container: dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9" id=895e23bc-5f64-4125-b0cf-e3d2996bc836 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.96046376Z" level=info msg="Started container" PID=1725 containerID=dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper id=895e23bc-5f64-4125-b0cf-e3d2996bc836 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed52dcb6e98027692222c21662bb8b91f3553eb34467636c28eabf3bbf859c1f
	Nov 24 09:29:52 no-preload-938348 crio[558]: time="2025-11-24T09:29:52.027777362Z" level=info msg="Removing container: 95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301" id=74b51c45-f593-4417-86a4-c0673e015964 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:52 no-preload-938348 crio[558]: time="2025-11-24T09:29:52.038707991Z" level=info msg="Removed container 95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=74b51c45-f593-4417-86a4-c0673e015964 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.048930093Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c522083e-8b5f-417d-b604-3735e8a7f46f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.049996058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=14ab401e-6837-4d35-afcb-713564998855 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.05116027Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=481ea68f-be25-4591-a653-03c837c42acd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.051359016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.056883965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057026073Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f7024a85c01d479aa89999624ed4f9001020362b1cc9c2140dab3d01177b98dd/merged/etc/passwd: no such file or directory"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057058261Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f7024a85c01d479aa89999624ed4f9001020362b1cc9c2140dab3d01177b98dd/merged/etc/group: no such file or directory"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057357345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.075656249Z" level=info msg="Created container 9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110: kube-system/storage-provisioner/storage-provisioner" id=481ea68f-be25-4591-a653-03c837c42acd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.076255902Z" level=info msg="Starting container: 9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110" id=d99d5924-1d27-45ba-92fb-d48c503b91de name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.078389177Z" level=info msg="Started container" PID=1739 containerID=9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110 description=kube-system/storage-provisioner/storage-provisioner id=d99d5924-1d27-45ba-92fb-d48c503b91de name=/runtime.v1.RuntimeService/StartContainer sandboxID=3567b00139c3d5644bcfecf2c5aa8f48ca08abc46999ae7e5d154f7fec55486a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9b04daab41016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   3567b00139c3d       storage-provisioner                          kube-system
	dc1ddeaa7bc8b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   ed52dcb6e9802       dashboard-metrics-scraper-867fb5f87b-zctmn   kubernetes-dashboard
	e11a1702d22ad       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   448dc164e1c53       kubernetes-dashboard-b84665fb8-stj4z         kubernetes-dashboard
	81f0e973a9ec0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   3567b00139c3d       storage-provisioner                          kube-system
	0d54153cfa237       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   3dace2cd67898       kindnet-zrnnf                                kube-system
	f5652149b9a3f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   49bc113326df3       busybox                                      default
	db9a9aba0693b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   9a712dbcc40cd       coredns-7d764666f9-ll2c4                     kube-system
	7d63bdcbadf22       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           51 seconds ago      Running             kube-proxy                  0                   0c3d323eeb47f       kube-proxy-smqgp                             kube-system
	3ad41a5ac915a       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           54 seconds ago      Running             kube-scheduler              0                   2d80122a6949a       kube-scheduler-no-preload-938348             kube-system
	36d1fad884886       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   d5e3549eb141f       etcd-no-preload-938348                       kube-system
	a9fb0f7c0718d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           54 seconds ago      Running             kube-controller-manager     0                   0a34a5cb54cdf       kube-controller-manager-no-preload-938348    kube-system
	bfa3e672acac8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           54 seconds ago      Running             kube-apiserver              0                   5e2f3ad9b5c4d       kube-apiserver-no-preload-938348             kube-system
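
	The table above is the CRI's view of all containers, running and exited. It can be reproduced on the node with:

	    sudo crictl ps -a   # -a includes exited containers, e.g. the scraper's attempt 2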
	
	
	==> coredns [db9a9aba0693b189be45a91992e8cb1a931ecc959206bcf32d3ea78e9ee78cab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56180 - 23550 "HINFO IN 7823894593425671882.5183944969165709600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054788306s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
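
	The "Failed to watch" errors line up with the apiserver restart window rather than a DNS fault. A sketch for pulling the same CoreDNS log via the API, assuming the conventional k8s-app=kube-dns label minikube applies to CoreDNS pods:

	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50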
	
	
	==> describe nodes <==
	Name:               no-preload-938348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-938348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=no-preload-938348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-938348
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-938348
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                5a3d5c1e-1c49-4ac3-aca7-a3f8db3c500c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-ll2c4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-938348                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-zrnnf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-938348              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-938348     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-smqgp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-938348              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zctmn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-stj4z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-938348 event: Registered Node no-preload-938348 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-938348 event: Registered Node no-preload-938348 in Controller
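
	The node dump above is standard kubectl output; to regenerate it, or to check just the readiness conditions:

	    kubectl describe node no-preload-938348
	    kubectl get node no-preload-938348 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'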
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
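
	The "martian source" lines are the kernel logging packets whose source address is unexpected on eth0; on bridged container networks like this they are common and usually benign. Whether they get logged at all is a sysctl:

	    sysctl net.ipv4.conf.all.log_martians     # 1 = log martians (as seen here)
	    sudo dmesg --ctime | grep -i martian | tail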
	
	
	==> etcd [36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652] <==
	{"level":"warn","ts":"2025-11-24T09:29:26.796430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.804910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.812754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.819758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.826823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.836766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.844606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.852278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.859140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.865874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.872676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.879859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.886858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.893241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.900256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.908202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.915029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.922049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.935087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.941655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.948991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.956255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:27.008835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:19.000064Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.101484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384287734852 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:636 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571766384287734850 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:30:19.000200Z","caller":"traceutil/trace.go:172","msg":"trace[616356095] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"259.970818ms","start":"2025-11-24T09:30:18.740212Z","end":"2025-11-24T09:30:19.000182Z","steps":["trace[616356095] 'process raft request'  (duration: 149.274849ms)","trace[616356095] 'compare'  (duration: 109.995024ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:30:20 up  1:12,  0 user,  load average: 5.33, 3.67, 2.33
	Linux no-preload-938348 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0d54153cfa2374cf3ebf9c6ea76683457f7cb271d29888cbab4b9e2c932c6fc1] <==
	I1124 09:29:28.723938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:28.724213       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:29:28.724450       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:28.724469       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:28.724497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:28.925299       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:28.925366       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:28.925388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:28.926259       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:29.325810       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:29.325854       1 metrics.go:72] Registering metrics
	I1124 09:29:29.325990       1 controller.go:711] "Syncing nftables rules"
	I1124 09:29:38.925417       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:38.925536       1 main.go:301] handling current node
	I1124 09:29:48.932006       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:48.932046       1 main.go:301] handling current node
	I1124 09:29:58.925462       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:58.925499       1 main.go:301] handling current node
	I1124 09:30:08.931489       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:30:08.931544       1 main.go:301] handling current node
	I1124 09:30:18.934445       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:30:18.934477       1 main.go:301] handling current node
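
	kindnet is handling only the single node, as expected, and the NRI error is a missing optional socket. To inspect its DaemonSet pods (the app=kindnet label is an assumption; verify against the pod metadata):

	    kubectl -n kube-system get pods -l app=kindnet -o wide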
	
	
	==> kube-apiserver [bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751] <==
	I1124 09:29:27.488928       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:29:27.489451       1 aggregator.go:187] initial CRD sync complete...
	I1124 09:29:27.489462       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:29:27.489467       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:29:27.489474       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:29:27.489760       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:29:27.488633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:29:27.490160       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:29:27.488939       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:29:27.495539       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 09:29:27.498784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:29:27.508594       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:27.512726       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:27.795320       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:29:27.847999       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:29:27.886868       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:27.908844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:27.920188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:29:27.989202       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.208.117"}
	I1124 09:29:28.013301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.170.104"}
	I1124 09:29:28.392822       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:29:31.076435       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:29:31.224031       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:31.224038       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:31.324259       1 controller.go:667] quota admission added evaluator for: endpoints
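
	After a restart like this, the apiserver's own readiness breakdown is often more telling than its log. A quick probe:

	    kubectl get --raw '/readyz?verbose' | tail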
	
	
	==> kube-controller-manager [a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548] <==
	I1124 09:29:30.641465       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633086       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639766       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641741       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640175       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640086       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641857       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639890       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.642195       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:29:30.642298       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:29:30.642306       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:30.642312       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.638484       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639381       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639268       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640019       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640281       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641791       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633065       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633076       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.647600       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.732426       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.735740       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.735760       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:29:30.735768       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [7d63bdcbadf22ee1f12d149e06cf86eec9b2d1bd764b9d32e156aaa6df690dfe] <==
	I1124 09:29:28.440176       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:29:28.517312       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:28.617998       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:28.618036       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:29:28.618150       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:29:28.646964       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:28.647027       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:29:28.653439       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:29:28.653863       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:29:28.653884       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:28.655316       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:29:28.655375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:29:28.655435       1 config.go:309] "Starting node config controller"
	I1124 09:29:28.655490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:29:28.655517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:29:28.655599       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:29:28.655617       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:29:28.655994       1 config.go:200] "Starting service config controller"
	I1124 09:29:28.656020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:29:28.755571       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:29:28.756234       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:29:28.756346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
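
	kube-proxy reports the iptables proxier; a sketch to confirm it actually programmed the service chains on the node (KUBE-SERVICES is the standard top-level chain name):

	    sudo iptables-save | grep -c 'KUBE-SERVICES'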
	
	
	==> kube-scheduler [3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9] <==
	I1124 09:29:25.794995       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:29:27.418384       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:29:27.418443       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:29:27.418457       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:29:27.418466       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:29:27.456091       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1124 09:29:27.456120       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:27.459104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:29:27.459183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:27.459326       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:29:27.459422       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:29:27.560232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: I1124 09:29:39.992461     703 scope.go:122] "RemoveContainer" containerID="e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: E1124 09:29:39.992515     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: I1124 09:29:39.992532     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: E1124 09:29:39.992696     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: E1124 09:29:40.996662     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: I1124 09:29:40.996692     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: E1124 09:29:40.996839     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: E1124 09:29:43.563654     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: I1124 09:29:43.563696     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: E1124 09:29:43.563902     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:51 no-preload-938348 kubelet[703]: E1124 09:29:51.911741     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:51 no-preload-938348 kubelet[703]: I1124 09:29:51.911784     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: I1124 09:29:52.026200     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: E1124 09:29:52.026459     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: I1124 09:29:52.026499     703 scope.go:122] "RemoveContainer" containerID="dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: E1124 09:29:52.026701     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: E1124 09:29:53.564298     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: I1124 09:29:53.564378     703 scope.go:122] "RemoveContainer" containerID="dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: E1124 09:29:53.564622     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:59 no-preload-938348 kubelet[703]: I1124 09:29:59.048478     703 scope.go:122] "RemoveContainer" containerID="81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf"
	Nov 24 09:30:01 no-preload-938348 kubelet[703]: E1124 09:30:01.365224     703 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ll2c4" containerName="coredns"
	Nov 24 09:30:16 no-preload-938348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:16 no-preload-938348 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:16 no-preload-938348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:30:16 no-preload-938348 systemd[1]: kubelet.service: Consumed 1.725s CPU time.
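
	The kubelet stop at 09:30:16 is systemd-driven (the test is pausing the node), not a crash. Its recent journal and unit state:

	    sudo journalctl -u kubelet -n 50 --no-pager
	    systemctl status kubelet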
	
	
	==> kubernetes-dashboard [e11a1702d22ad7eb2911cd8bf1695d4428ffe0b597b74c43c44c0dd577f07792] <==
	2025/11/24 09:29:35 Using namespace: kubernetes-dashboard
	2025/11/24 09:29:35 Using in-cluster config to connect to apiserver
	2025/11/24 09:29:35 Using secret token for csrf signing
	2025/11/24 09:29:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:29:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:29:35 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/11/24 09:29:35 Generating JWE encryption key
	2025/11/24 09:29:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:29:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:29:36 Initializing JWE encryption key from synchronized object
	2025/11/24 09:29:36 Creating in-cluster Sidecar client
	2025/11/24 09:29:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:36 Serving insecurely on HTTP port: 9090
	2025/11/24 09:30:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:35 Starting overwatch
	
	
	==> storage-provisioner [81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf] <==
	I1124 09:29:28.469029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:29:58.471738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
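	
	This first provisioner instance dies because the in-cluster service VIP 10.96.0.1:443 stays unreachable for 32s, a common race right after a node restart while kube-proxy is still reprogramming the service network. A quick reachability check from inside the node, assuming curl is present in the kicbase image:
	
		minikube -p no-preload-938348 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
	
	If this returns the apiserver version JSON once the node settles, the failure above was a startup race rather than a persistent networking fault; the replacement instance below bears that out.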
	
	
	==> storage-provisioner [9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110] <==
	I1124 09:29:59.093159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:29:59.102367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:29:59.102443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:29:59.104705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:02.559580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:06.819973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:10.418653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:13.472264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.495788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.502689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:16.502866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:30:16.503048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced!
	I1124 09:30:16.503397       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3e597c6-18df-436b-9dd0-bf6a334e6e38", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced became leader
	W1124 09:30:16.508260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.514204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:16.604058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced!
	W1124 09:30:18.517207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:18.559320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
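	
	The repeated warnings come from client-go: the provisioner's leader election still locks on a v1 Endpoints object, which is deprecated since v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. The lease it eventually acquires can be inspected directly (a sketch, using the object name shown in the log):
	
		kubectl --context no-preload-938348 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	
	The holder identity lives in the control-plane.alpha.kubernetes.io/leader annotation on that object.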
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-938348 -n no-preload-938348
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-938348 -n no-preload-938348: exit status 2 (350.410015ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-938348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-938348
helpers_test.go:243: (dbg) docker inspect no-preload-938348:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	        "Created": "2025-11-24T09:28:01.464607298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:29:17.27545814Z",
	            "FinishedAt": "2025-11-24T09:29:15.974768209Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/hosts",
	        "LogPath": "/var/lib/docker/containers/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761/c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761-json.log",
	        "Name": "/no-preload-938348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-938348:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-938348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1c5f9bb92d9629a5d28f524d1891d816119ebde9351bc226361265052ff4761",
	                "LowerDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1696e8a37774e5ae1fdda52f486b67f85100be9671299f9b87f52503c33413ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-938348",
	                "Source": "/var/lib/docker/volumes/no-preload-938348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-938348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-938348",
	                "name.minikube.sigs.k8s.io": "no-preload-938348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4c94100bc69470817b1f2be0d46cb001b149f198cb2bd347aa463023348db86c",
	            "SandboxKey": "/var/run/docker/netns/4c94100bc694",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-938348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f03f3b5e2bfb0cd68097788ad47d94eb14c12cf815ca0f14753094201a5fac2",
	                    "EndpointID": "44487318792c4f8d1f6132748d3c811aa814aeca9ddd2c393ee790bb5cb45f14",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:51:36:52:ba:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-938348",
	                        "c1c5f9bb92d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
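
The inspect dump confirms the container is "running" and not paused even though the Pause subtest just failed. The relevant fields can be pulled without scanning the full JSON; a sketch:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' no-preload-938348

Expected here: running paused=false restarts=0, matching the dump above.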
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348: exit status 2 (369.960768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-938348 logs -n 25
E1124 09:30:22.522934    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:30:22.529290    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-938348 logs -n 25: (1.14702177s)
E1124 09:30:22.541371    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ start   │ -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
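	
	Reading one header as a worked example: in `I1124 09:30:14.256245  350567 out.go:360]`, `I` is the severity (the [IWEF] slot: Info/Warning/Error/Fatal), `1124` is the month and day (Nov 24), `09:30:14.256245` the wall-clock time, `350567` the thread id, and `out.go:360` the source file and line emitting the message.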
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
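	
	The drop-in above clears the stock ExecStart and re-points kubelet at the minikube-provisioned binary with the crio endpoint and node IP baked in. How it landed on disk can be verified from the host (a sketch, assuming systemctl is available in the node image):
	
		minikube -p default-k8s-diff-port-164377 ssh -- systemctl cat kubelet
	
	`systemctl cat` prints the base unit plus every drop-in (here /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) in the order systemd merges them.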
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
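	
	This generated config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged sanity check, assuming the staged kubeadm binary supports `kubeadm config validate` (added in v1.26, so v1.34.2 here should):
	
		minikube -p default-k8s-diff-port-164377 ssh -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new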
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
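
	[Editor's note] Each CA copied into /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject hash (the "openssl x509 -hash -noout" then "ln -fs ... <hash>.0" pairs: 3ec20f2e for 92432.pem, b5213941 for minikubeCA.pem, 51391683 for 9243.pem). That hash-named symlink is how OpenSSL's default verify path locates trust anchors. A minimal sketch of the same dance in Go, assuming root privileges and shelling out to openssl as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash, then
// links <certsDir>/<hash>.0 at the PEM, following the c_rehash convention.
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
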
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
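
	[Editor's note] The six "openssl x509 ... -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. The same check can be done without shelling out, via crypto/x509; a minimal sketch assuming a single-certificate PEM file:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window; equivalent to `openssl x509 -checkend <seconds>`
// exiting non-zero.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
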
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:14.455914  350567 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:30:14.456108  350567 start.go:159] libmachine.API.Create for "embed-certs-673346" (driver="docker")
	I1124 09:30:14.456138  350567 client.go:173] LocalClient.Create starting
	I1124 09:30:14.456212  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:30:14.456244  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456264  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456310  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:30:14.456355  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456379  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456793  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:30:14.478660  350567 cli_runner.go:211] docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:30:14.478755  350567 network_create.go:284] running [docker network inspect embed-certs-673346] to gather additional debugging logs...
	I1124 09:30:14.478786  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346
	W1124 09:30:14.498235  350567 cli_runner.go:211] docker network inspect embed-certs-673346 returned with exit code 1
	I1124 09:30:14.498267  350567 network_create.go:287] error running [docker network inspect embed-certs-673346]: docker network inspect embed-certs-673346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-673346 not found
	I1124 09:30:14.498281  350567 network_create.go:289] output of [docker network inspect embed-certs-673346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-673346 not found
	
	** /stderr **
	I1124 09:30:14.498385  350567 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:14.520018  350567 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:30:14.520793  350567 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:30:14.521788  350567 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:30:14.522707  350567 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5a70}
	I1124 09:30:14.522732  350567 network_create.go:124] attempt to create docker network embed-certs-673346 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 09:30:14.522785  350567 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-673346 embed-certs-673346
	I1124 09:30:14.586516  350567 network_create.go:108] docker network embed-certs-673346 192.168.76.0/24 created
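
	[Editor's note] The network_create lines above show the free-subnet scan: candidates step through the third octet by 9 (192.168.49, .58, .67 are taken by other profiles' bridges, so .76 wins; the 85.2 and 94.2 addresses elsewhere in this report fit the same series). A minimal sketch of that scan, where the `taken` predicate stands in for the `docker network inspect` calls; the exact step and upper bound here are assumptions:

package main

import "fmt"

// firstFreeSubnet walks 192.168.x.0/24 candidates in steps of 9 and
// returns the first one the predicate reports as free.
func firstFreeSubnet(taken func(cidr string) bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken(cidr) {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	used := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	cidr, ok := firstFreeSubnet(func(c string) bool { return used[c] })
	fmt.Println(cidr, ok) // 192.168.76.0/24 true
}
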
	I1124 09:30:14.586547  350567 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-673346" container
	I1124 09:30:14.586627  350567 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:30:14.610804  350567 cli_runner.go:164] Run: docker volume create embed-certs-673346 --label name.minikube.sigs.k8s.io=embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:30:14.632832  350567 oci.go:103] Successfully created a docker volume embed-certs-673346
	I1124 09:30:14.632925  350567 cli_runner.go:164] Run: docker run --rm --name embed-certs-673346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --entrypoint /usr/bin/test -v embed-certs-673346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:30:15.090593  350567 oci.go:107] Successfully prepared a docker volume embed-certs-673346
	I1124 09:30:15.090677  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:15.090690  350567 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:30:15.090748  350567 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:30:18.723622  346330 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:18.723658  346330 node_ready.go:38] duration metric: took 2.791273581s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:18.723674  346330 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:18.723726  346330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:19.762798  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801068095s)
	I1124 09:30:19.762854  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.778333862s)
	I1124 09:30:19.809952  346330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.086204016s)
	I1124 09:30:19.809990  346330 api_server.go:72] duration metric: took 4.127914679s to wait for apiserver process to appear ...
	I1124 09:30:19.809999  346330 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:19.810019  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:19.810840  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65851014s)
	I1124 09:30:19.812854  346330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-164377 addons enable metrics-server
	
	I1124 09:30:19.814608  346330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:30:19.815981  346330 addons.go:530] duration metric: took 4.133745613s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:30:19.819288  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:30:19.819490  346330 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:30:20.310801  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:20.318184  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:30:20.320089  346330 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:20.320238  346330 api_server.go:131] duration metric: took 510.229099ms to wait for apiserver health ...
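
	[Editor's note] The healthz exchange above is a normal restart pattern: the first probe returns 500 because the rbac/bootstrap-roles post-start hook has not finished ("[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"), and a retry roughly 500ms later returns 200. A minimal Go sketch of that poll loop; InsecureSkipVerify is for the sketch only, since minikube verifies against the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz until it returns 200 or the
// deadline passes, retrying on both transport errors and non-200 codes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
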
	I1124 09:30:20.320485  346330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:20.328441  346330 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:20.328478  346330 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.328490  346330 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.328498  346330 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.328506  346330 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.328515  346330 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.328521  346330 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.328529  346330 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.328534  346330 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.328541  346330 system_pods.go:74] duration metric: took 7.85104ms to wait for pod list to return data ...
	I1124 09:30:20.328554  346330 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:20.332981  346330 default_sa.go:45] found service account: "default"
	I1124 09:30:20.333009  346330 default_sa.go:55] duration metric: took 4.449084ms for default service account to be created ...
	I1124 09:30:20.333021  346330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:20.338641  346330 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:20.338682  346330 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.338698  346330 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.338709  346330 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.338718  346330 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.338727  346330 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.338734  346330 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.338741  346330 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.338747  346330 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.338757  346330 system_pods.go:126] duration metric: took 5.728957ms to wait for k8s-apps to be running ...
	I1124 09:30:20.338767  346330 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:20.338820  346330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:20.357733  346330 system_svc.go:56] duration metric: took 18.956624ms WaitForService to wait for kubelet
	I1124 09:30:20.358599  346330 kubeadm.go:587] duration metric: took 4.676515085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:20.358629  346330 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:20.363231  346330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:20.363257  346330 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:20.363289  346330 node_conditions.go:105] duration metric: took 4.654352ms to run NodePressure ...
	I1124 09:30:20.363303  346330 start.go:242] waiting for startup goroutines ...
	I1124 09:30:20.363313  346330 start.go:247] waiting for cluster config update ...
	I1124 09:30:20.363345  346330 start.go:256] writing updated cluster config ...
	I1124 09:30:20.363650  346330 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:20.369452  346330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:20.373717  346330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Nov 24 09:29:39 no-preload-938348 crio[558]: time="2025-11-24T09:29:39.033204838Z" level=info msg="Started container" PID=1715 containerID=95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper id=f9b4a1ec-2e15-4838-9ee2-4eb1d75b43d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed52dcb6e98027692222c21662bb8b91f3553eb34467636c28eabf3bbf859c1f
	Nov 24 09:29:39 no-preload-938348 crio[558]: time="2025-11-24T09:29:39.993682163Z" level=info msg="Removing container: e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5" id=adc8e267-f0af-4a9b-b29c-cedf3027fa9b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:40 no-preload-938348 crio[558]: time="2025-11-24T09:29:40.004412562Z" level=info msg="Removed container e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=adc8e267-f0af-4a9b-b29c-cedf3027fa9b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.912421649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=606b44e5-6297-4c4d-88f5-ca1b39867c67 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.914856235Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e81eff4b-26aa-4652-a30f-ce63ebd8b8db name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.917678619Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=d899827b-cc3c-4878-813f-22d219aa8613 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.917810305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.925074462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.925537592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.957813423Z" level=info msg="Created container dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=d899827b-cc3c-4878-813f-22d219aa8613 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.958485796Z" level=info msg="Starting container: dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9" id=895e23bc-5f64-4125-b0cf-e3d2996bc836 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:51 no-preload-938348 crio[558]: time="2025-11-24T09:29:51.96046376Z" level=info msg="Started container" PID=1725 containerID=dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper id=895e23bc-5f64-4125-b0cf-e3d2996bc836 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed52dcb6e98027692222c21662bb8b91f3553eb34467636c28eabf3bbf859c1f
	Nov 24 09:29:52 no-preload-938348 crio[558]: time="2025-11-24T09:29:52.027777362Z" level=info msg="Removing container: 95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301" id=74b51c45-f593-4417-86a4-c0673e015964 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:52 no-preload-938348 crio[558]: time="2025-11-24T09:29:52.038707991Z" level=info msg="Removed container 95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn/dashboard-metrics-scraper" id=74b51c45-f593-4417-86a4-c0673e015964 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.048930093Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c522083e-8b5f-417d-b604-3735e8a7f46f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.049996058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=14ab401e-6837-4d35-afcb-713564998855 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.05116027Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=481ea68f-be25-4591-a653-03c837c42acd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.051359016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.056883965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057026073Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f7024a85c01d479aa89999624ed4f9001020362b1cc9c2140dab3d01177b98dd/merged/etc/passwd: no such file or directory"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057058261Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f7024a85c01d479aa89999624ed4f9001020362b1cc9c2140dab3d01177b98dd/merged/etc/group: no such file or directory"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.057357345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.075656249Z" level=info msg="Created container 9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110: kube-system/storage-provisioner/storage-provisioner" id=481ea68f-be25-4591-a653-03c837c42acd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.076255902Z" level=info msg="Starting container: 9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110" id=d99d5924-1d27-45ba-92fb-d48c503b91de name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:29:59 no-preload-938348 crio[558]: time="2025-11-24T09:29:59.078389177Z" level=info msg="Started container" PID=1739 containerID=9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110 description=kube-system/storage-provisioner/storage-provisioner id=d99d5924-1d27-45ba-92fb-d48c503b91de name=/runtime.v1.RuntimeService/StartContainer sandboxID=3567b00139c3d5644bcfecf2c5aa8f48ca08abc46999ae7e5d154f7fec55486a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9b04daab41016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   3567b00139c3d       storage-provisioner                          kube-system
	dc1ddeaa7bc8b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   ed52dcb6e9802       dashboard-metrics-scraper-867fb5f87b-zctmn   kubernetes-dashboard
	e11a1702d22ad       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   448dc164e1c53       kubernetes-dashboard-b84665fb8-stj4z         kubernetes-dashboard
	81f0e973a9ec0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   3567b00139c3d       storage-provisioner                          kube-system
	0d54153cfa237       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   3dace2cd67898       kindnet-zrnnf                                kube-system
	f5652149b9a3f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   49bc113326df3       busybox                                      default
	db9a9aba0693b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   9a712dbcc40cd       coredns-7d764666f9-ll2c4                     kube-system
	7d63bdcbadf22       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           53 seconds ago      Running             kube-proxy                  0                   0c3d323eeb47f       kube-proxy-smqgp                             kube-system
	3ad41a5ac915a       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           56 seconds ago      Running             kube-scheduler              0                   2d80122a6949a       kube-scheduler-no-preload-938348             kube-system
	36d1fad884886       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   d5e3549eb141f       etcd-no-preload-938348                       kube-system
	a9fb0f7c0718d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           56 seconds ago      Running             kube-controller-manager     0                   0a34a5cb54cdf       kube-controller-manager-no-preload-938348    kube-system
	bfa3e672acac8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           56 seconds ago      Running             kube-apiserver              0                   5e2f3ad9b5c4d       kube-apiserver-no-preload-938348             kube-system
	
	
	==> coredns [db9a9aba0693b189be45a91992e8cb1a931ecc959206bcf32d3ea78e9ee78cab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56180 - 23550 "HINFO IN 7823894593425671882.5183944969165709600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054788306s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-938348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-938348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=no-preload-938348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_28_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-938348
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:29:58 +0000   Mon, 24 Nov 2025 09:28:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-938348
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                5a3d5c1e-1c49-4ac3-aca7-a3f8db3c500c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-ll2c4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-938348                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-zrnnf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-938348              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-938348     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-smqgp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-938348              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zctmn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-stj4z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-938348 event: Registered Node no-preload-938348 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-938348 event: Registered Node no-preload-938348 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [36d1fad8848862ea43c7b05032173e3e3b7f0933dc08295c02778fb4b025a652] <==
	{"level":"warn","ts":"2025-11-24T09:29:26.796430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.804910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.812754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.819758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.826823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.836766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.844606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.852278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.859140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.865874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.872676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.879859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.886858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.893241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.900256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.908202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.915029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.922049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.935087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.941655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.948991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:26.956255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:29:27.008835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:19.000064Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.101484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766384287734852 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:636 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571766384287734850 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:30:19.000200Z","caller":"traceutil/trace.go:172","msg":"trace[616356095] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"259.970818ms","start":"2025-11-24T09:30:18.740212Z","end":"2025-11-24T09:30:19.000182Z","steps":["trace[616356095] 'process raft request'  (duration: 149.274849ms)","trace[616356095] 'compare'  (duration: 109.995024ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:30:22 up  1:12,  0 user,  load average: 5.15, 3.66, 2.33
	Linux no-preload-938348 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0d54153cfa2374cf3ebf9c6ea76683457f7cb271d29888cbab4b9e2c932c6fc1] <==
	I1124 09:29:28.723938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:29:28.724213       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:29:28.724450       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:29:28.724469       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:29:28.724497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:29:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:29:28.925299       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:29:28.925366       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:29:28.925388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:29:28.926259       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:29:29.325810       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:29:29.325854       1 metrics.go:72] Registering metrics
	I1124 09:29:29.325990       1 controller.go:711] "Syncing nftables rules"
	I1124 09:29:38.925417       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:38.925536       1 main.go:301] handling current node
	I1124 09:29:48.932006       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:48.932046       1 main.go:301] handling current node
	I1124 09:29:58.925462       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:29:58.925499       1 main.go:301] handling current node
	I1124 09:30:08.931489       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:30:08.931544       1 main.go:301] handling current node
	I1124 09:30:18.934445       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:30:18.934477       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bfa3e672acac8938f9e806c8ee2b3dfe80d66448b24724b3bbf29f8c10551751] <==
	I1124 09:29:27.488928       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:29:27.489451       1 aggregator.go:187] initial CRD sync complete...
	I1124 09:29:27.489462       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:29:27.489467       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:29:27.489474       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:29:27.489760       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:29:27.488633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:29:27.490160       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:29:27.488939       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:29:27.495539       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 09:29:27.498784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:29:27.508594       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:29:27.512726       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:29:27.795320       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:29:27.847999       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:29:27.886868       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:29:27.908844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:29:27.920188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:29:27.989202       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.208.117"}
	I1124 09:29:28.013301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.170.104"}
	I1124 09:29:28.392822       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:29:31.076435       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:29:31.224031       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:31.224038       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:29:31.324259       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a9fb0f7c0718dd8bc54d167231997e0c85b183e6aa45ef9d18e4350114c5d548] <==
	I1124 09:29:30.641465       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633086       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639766       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641741       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640175       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640086       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641857       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639890       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.642195       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:29:30.642298       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:29:30.642306       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:30.642312       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.638484       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639381       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.639268       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640019       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.640281       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.641791       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633065       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.633076       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.647600       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.732426       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.735740       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:30.735760       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:29:30.735768       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [7d63bdcbadf22ee1f12d149e06cf86eec9b2d1bd764b9d32e156aaa6df690dfe] <==
	I1124 09:29:28.440176       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:29:28.517312       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:28.617998       1 shared_informer.go:377] "Caches are synced"
	I1124 09:29:28.618036       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:29:28.618150       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:29:28.646964       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:29:28.647027       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:29:28.653439       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:29:28.653863       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:29:28.653884       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:28.655316       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:29:28.655375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:29:28.655435       1 config.go:309] "Starting node config controller"
	I1124 09:29:28.655490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:29:28.655517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:29:28.655599       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:29:28.655617       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:29:28.655994       1 config.go:200] "Starting service config controller"
	I1124 09:29:28.656020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:29:28.755571       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:29:28.756234       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:29:28.756346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ad41a5ac915a2420a94ca88b9c3279566a6e896889754dc508c89ee3c9211e9] <==
	I1124 09:29:25.794995       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:29:27.418384       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:29:27.418443       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:29:27.418457       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:29:27.418466       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:29:27.456091       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1124 09:29:27.456120       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:29:27.459104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:29:27.459183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:29:27.459326       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:29:27.459422       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:29:27.560232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: I1124 09:29:39.992461     703 scope.go:122] "RemoveContainer" containerID="e2ae6af8996f0d611a1b9d799a16b76c535d7e56719f3aecfdbf41ad8923add5"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: E1124 09:29:39.992515     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: I1124 09:29:39.992532     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:39 no-preload-938348 kubelet[703]: E1124 09:29:39.992696     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: E1124 09:29:40.996662     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: I1124 09:29:40.996692     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:40 no-preload-938348 kubelet[703]: E1124 09:29:40.996839     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: E1124 09:29:43.563654     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: I1124 09:29:43.563696     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:43 no-preload-938348 kubelet[703]: E1124 09:29:43.563902     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:51 no-preload-938348 kubelet[703]: E1124 09:29:51.911741     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:51 no-preload-938348 kubelet[703]: I1124 09:29:51.911784     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: I1124 09:29:52.026200     703 scope.go:122] "RemoveContainer" containerID="95ff74a770bb18820586d8bbb0ca246f10f4170b2ff894ec934fb7a1d53ca301"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: E1124 09:29:52.026459     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: I1124 09:29:52.026499     703 scope.go:122] "RemoveContainer" containerID="dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	Nov 24 09:29:52 no-preload-938348 kubelet[703]: E1124 09:29:52.026701     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: E1124 09:29:53.564298     703 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" containerName="dashboard-metrics-scraper"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: I1124 09:29:53.564378     703 scope.go:122] "RemoveContainer" containerID="dc1ddeaa7bc8b80f3d15a07dd9d2253d171da6fb20cc40112107747006bea0d9"
	Nov 24 09:29:53 no-preload-938348 kubelet[703]: E1124 09:29:53.564622     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zctmn_kubernetes-dashboard(8146347a-0836-4c61-8fe2-5840f7a38ebc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zctmn" podUID="8146347a-0836-4c61-8fe2-5840f7a38ebc"
	Nov 24 09:29:59 no-preload-938348 kubelet[703]: I1124 09:29:59.048478     703 scope.go:122] "RemoveContainer" containerID="81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf"
	Nov 24 09:30:01 no-preload-938348 kubelet[703]: E1124 09:30:01.365224     703 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ll2c4" containerName="coredns"
	Nov 24 09:30:16 no-preload-938348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:30:16 no-preload-938348 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:30:16 no-preload-938348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:30:16 no-preload-938348 systemd[1]: kubelet.service: Consumed 1.725s CPU time.
	
	
	==> kubernetes-dashboard [e11a1702d22ad7eb2911cd8bf1695d4428ffe0b597b74c43c44c0dd577f07792] <==
	2025/11/24 09:29:35 Using namespace: kubernetes-dashboard
	2025/11/24 09:29:35 Using in-cluster config to connect to apiserver
	2025/11/24 09:29:35 Using secret token for csrf signing
	2025/11/24 09:29:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:29:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:29:35 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/11/24 09:29:35 Generating JWE encryption key
	2025/11/24 09:29:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:29:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:29:36 Initializing JWE encryption key from synchronized object
	2025/11/24 09:29:36 Creating in-cluster Sidecar client
	2025/11/24 09:29:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:36 Serving insecurely on HTTP port: 9090
	2025/11/24 09:30:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:29:35 Starting overwatch
	
	
	==> storage-provisioner [81f0e973a9ec09fc3a5946da26b648d86558c8a265baebb6184fd3c1df1a5ddf] <==
	I1124 09:29:28.469029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:29:58.471738       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9b04daab41016de97664bfff3ac8d00187ec2a2d85c4cb90395b0480ef2f4110] <==
	I1124 09:29:59.093159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:29:59.102367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:29:59.102443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:29:59.104705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:02.559580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:06.819973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:10.418653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:13.472264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.495788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.502689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:16.502866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:30:16.503048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced!
	I1124 09:30:16.503397       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3e597c6-18df-436b-9dd0-bf6a334e6e38", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced became leader
	W1124 09:30:16.508260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:16.514204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:16.604058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-938348_960bf938-2f88-4278-bf80-b1cdbb186ced!
	W1124 09:30:18.517207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:18.559320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:20.562959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:20.566973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
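The storage-provisioner output above ends with the standard client-go leader-election handshake: the freshly restarted pod attempts to acquire the kube-system/k8s.io-minikube-hostpath lease (leaderelection.go:243), acquires it, and only then starts the provisioner controller. The repeated "v1 Endpoints is deprecated" warnings suggest the lock is still Endpoints-backed. For reference, a minimal sketch of the same pattern using client-go's newer LeaseLock; everything beyond the lease namespace and name is an illustrative assumption, not minikube's actual code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes this runs in-cluster, like the provisioner pod in the log.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // the pod name is a common leader identity

	// Same lease namespace/name the provisioner log reports acquiring.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, stopping")
			},
		},
	})
}

Run as two replicas with RBAC on coordination.k8s.io leases, this sketch produces exactly the acquire/standby sequence visible in the two storage-provisioner logs above.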
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-938348 -n no-preload-938348
E1124 09:30:22.564902    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:30:22.606398    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:30:22.687806    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:30:22.849420    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-938348 -n no-preload-938348: exit status 2 (355.979636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-938348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.844819ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_5b0fde3b61f3dfcdb425eb33a6c00f71daa98e69_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
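The exit-11 failure above is not an addon problem: before enabling an addon, minikube checks whether the cluster is paused, and per the error chain that check shells out to `sudo runc list -f json`, which exits 1 here because runc's default state directory /run/runc is absent on this node. A minimal sketch of such a paused check, assuming only what the error text shows; the type and function names are illustrative, not minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields of `runc list -f json` output that a
// paused check needs; the real output carries more fields.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// listPaused reproduces the failing step from the error message:
// "list paused: runc: sudo runc list -f json: Process exited with status 1".
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node runc exits 1 with "open /run/runc: no such file
		// or directory", which surfaces as MK_ADDON_ENABLE_PAUSED.
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

On a healthy node the command prints a JSON array of container states and the filter returns the paused IDs; here the exec error itself is what bubbles up as MK_ADDON_ENABLE_PAUSED.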
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-673346 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-673346 describe deploy/metrics-server -n kube-system: exit status 1 (69.400832ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-673346 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-673346
helpers_test.go:243: (dbg) docker inspect embed-certs-673346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	        "Created": "2025-11-24T09:30:19.733597004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353422,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:30:19.786846185Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hosts",
	        "LogPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794-json.log",
	        "Name": "/embed-certs-673346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-673346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-673346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	                "LowerDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-673346",
	                "Source": "/var/lib/docker/volumes/embed-certs-673346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-673346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-673346",
	                "name.minikube.sigs.k8s.io": "embed-certs-673346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c73837a76ab56431dbd3b60542b05e4bede214ed97643dd32aa4ceb084010e7c",
	            "SandboxKey": "/var/run/docker/netns/c73837a76ab5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-673346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97d1cb035a36ae5fa5c959009087829b88d672ef46bb4e02a32ec47d72e472d5",
	                    "EndpointID": "2c6f8215348a1a85c1a59e867c2c66b795d9d1f914350aebb6e299897fc5f61e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "92:0c:77:04:43:15",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-673346",
	                        "1bda3483b0ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
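In the inspect output above, each guest port is published on 127.0.0.1 with an ephemeral host port (22/tcp -> 127.0.0.1:33133, 8443/tcp -> 127.0.0.1:33136), which is how the harness reaches SSH and the API server. The harness shells out to `docker inspect`, but the same mapping can be read programmatically; a small illustrative sketch with the Docker Go SDK, assuming a local daemon and the container name from the output.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local Docker daemon via the usual environment defaults.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Container name taken from the inspect output above.
	info, err := cli.ContainerInspect(context.Background(), "embed-certs-673346")
	if err != nil {
		log.Fatal(err)
	}

	// NetworkSettings.Ports maps guest ports to host bindings,
	// e.g. 22/tcp -> 127.0.0.1:33133 in the output above.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}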
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346
I1124 09:31:06.664578    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:31:06.825892    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25: (1.056224086s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-967467                                                                                                                                                                                                                         │ kubernetes-upgrade-967467    │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ default-k8s-diff-port-164377 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
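
The repeated "Not caching binary" lines above show minikube fetching kubeadm directly from dl.k8s.io, pinned to the published SHA-256 checksum file. A standalone sketch of the same verification, using the URLs from the log (this mirrors the upstream Kubernetes install procedure, not minikube's internal downloader):

    curl -LO "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256"
    # sha256sum --check wants "<hash>  <filename>"; expect output "kubeadm: OK"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
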
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
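
The rendered [Service] fragment above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp lines further down show the transfer). A minimal sketch, run on the node (e.g. via `minikube ssh`), to confirm systemd picked up the override:

    systemctl cat kubelet        # should print the 10-kubeadm.conf drop-in with the ExecStart shown above
    systemctl is-active kubelet  # reports "active" once the unit starts
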
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
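
The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). To sanity-check such a file by hand, recent kubeadm releases can validate it; a sketch, assuming a kubeadm new enough to ship the `config validate` subcommand:

    # Validate the rendered multi-document config against the kubeadm API schema.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
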
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
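
The /etc/hosts edit above goes through a temp file so a failed grep can never truncate the live file. The same idiom, isolated from the log:

    # Filter out any stale entry, append the fresh one, then copy the result over /etc/hosts.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
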
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
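
The ls / `openssl x509 -hash` / ln sequence above implements OpenSSL's CA directory convention: a trusted certificate is looked up via a symlink named after its subject hash plus an index suffix (`<hash>.0`). Reproduced as a two-liner, using the cert from the log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem)
    sudo ln -fs /usr/share/ca-certificates/9243.pem "/etc/ssl/certs/${hash}.0"   # e.g. 51391683.0
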
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
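
Each `-checkend 86400` call exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 h); this is how minikube decides whether the control-plane certs need regenerating. Standalone:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for 24h+" || echo "expires (or is expired) within 24h"
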
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
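
The three sshutil lines show how commands reach the node: sshd inside the container is published on a host port (33128 here), and minikube authenticates with the per-machine key. A manual equivalent session, using the values from the log:

    ssh -i /home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa \
        -p 33128 docker@127.0.0.1
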
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
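
After the batched apply above, a rollout check would confirm the addon actually came up; the namespace and deployment names here are assumed from the dashboard addon's manifests, not shown in this log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl \
      -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s
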
	I1124 09:30:14.455914  350567 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:30:14.456108  350567 start.go:159] libmachine.API.Create for "embed-certs-673346" (driver="docker")
	I1124 09:30:14.456138  350567 client.go:173] LocalClient.Create starting
	I1124 09:30:14.456212  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:30:14.456244  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456264  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456310  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:30:14.456355  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456379  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456793  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:30:14.478660  350567 cli_runner.go:211] docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:30:14.478755  350567 network_create.go:284] running [docker network inspect embed-certs-673346] to gather additional debugging logs...
	I1124 09:30:14.478786  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346
	W1124 09:30:14.498235  350567 cli_runner.go:211] docker network inspect embed-certs-673346 returned with exit code 1
	I1124 09:30:14.498267  350567 network_create.go:287] error running [docker network inspect embed-certs-673346]: docker network inspect embed-certs-673346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-673346 not found
	I1124 09:30:14.498281  350567 network_create.go:289] output of [docker network inspect embed-certs-673346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-673346 not found
	
	** /stderr **
	I1124 09:30:14.498385  350567 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:14.520018  350567 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:30:14.520793  350567 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:30:14.521788  350567 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:30:14.522707  350567 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5a70}
	I1124 09:30:14.522732  350567 network_create.go:124] attempt to create docker network embed-certs-673346 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 09:30:14.522785  350567 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-673346 embed-certs-673346
	I1124 09:30:14.586516  350567 network_create.go:108] docker network embed-certs-673346 192.168.76.0/24 created
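
The three "skipping subnet" lines above show the free-subnet search: minikube walks candidate 192.168.x.0/24 ranges (third octet 49, 58, 67, 76, ..., stepping by 9 as seen in the log) and takes the first one not claimed by an existing bridge. A minimal sketch of that probing order; freeSubnet and taken are illustrative names, and the taken set stands in for what the docker network inspect calls above return:

package main

import "fmt"

// freeSubnet returns the first candidate /24 not already used by a
// docker bridge, probing third octets 49, 58, 67, ... as in the log.
func freeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-2543a3a5b30f
		"192.168.58.0/24": true, // br-c977c796f084
		"192.168.67.0/24": true, // br-2994a163bb80
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.76.0/24
	}
}
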
	I1124 09:30:14.586547  350567 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-673346" container
	I1124 09:30:14.586627  350567 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:30:14.610804  350567 cli_runner.go:164] Run: docker volume create embed-certs-673346 --label name.minikube.sigs.k8s.io=embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:30:14.632832  350567 oci.go:103] Successfully created a docker volume embed-certs-673346
	I1124 09:30:14.632925  350567 cli_runner.go:164] Run: docker run --rm --name embed-certs-673346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --entrypoint /usr/bin/test -v embed-certs-673346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:30:15.090593  350567 oci.go:107] Successfully prepared a docker volume embed-certs-673346
	I1124 09:30:15.090677  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:15.090690  350567 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:30:15.090748  350567 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
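
The extraction step above mounts the lz4 preload tarball read-only and the machine's volume into a throwaway kicbase container, then untars into the volume so the node starts with images already in place (completion is logged further down, after ~4.5s). A sketch of the same invocation via os/exec; extractPreload is an illustrative helper, the paths and image digest are copied from the log, and a local docker daemon is assumed:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs tar inside the kicbase image with the preload
// tarball and the machine volume mounted, mirroring the docker run
// command in the log above.
func extractPreload(tarball, volume, kicbase string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4",
		"embed-certs-673346",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f")
	if err != nil {
		fmt.Println(err)
	}
}
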
	I1124 09:30:18.723622  346330 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:18.723658  346330 node_ready.go:38] duration metric: took 2.791273581s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:18.723674  346330 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:18.723726  346330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:19.762798  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801068095s)
	I1124 09:30:19.762854  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.778333862s)
	I1124 09:30:19.809952  346330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.086204016s)
	I1124 09:30:19.809990  346330 api_server.go:72] duration metric: took 4.127914679s to wait for apiserver process to appear ...
	I1124 09:30:19.809999  346330 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:19.810019  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:19.810840  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65851014s)
	I1124 09:30:19.812854  346330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-164377 addons enable metrics-server
	
	I1124 09:30:19.814608  346330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:30:19.815981  346330 addons.go:530] duration metric: took 4.133745613s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:30:19.819288  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:30:19.819490  346330 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:30:20.310801  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:20.318184  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:30:20.320089  346330 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:20.320238  346330 api_server.go:131] duration metric: took 510.229099ms to wait for apiserver health ...
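
The 500 above is transient: [-]poststarthook/rbac/bootstrap-roles fails only while the bootstrap RBAC roles are still being created, and the next probe half a second later returns 200. A minimal sketch of such a poll loop; waitHealthz is an illustrative name, the URL is the one from the log, and skipping TLS verification stands in for minikube's real CA handling (not for production use):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes, as the health wait in the log does.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows a ~500ms retry cadence
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
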
	I1124 09:30:20.320485  346330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:20.328441  346330 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:20.328478  346330 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.328490  346330 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.328498  346330 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.328506  346330 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.328515  346330 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.328521  346330 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.328529  346330 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.328534  346330 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.328541  346330 system_pods.go:74] duration metric: took 7.85104ms to wait for pod list to return data ...
	I1124 09:30:20.328554  346330 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:20.332981  346330 default_sa.go:45] found service account: "default"
	I1124 09:30:20.333009  346330 default_sa.go:55] duration metric: took 4.449084ms for default service account to be created ...
	I1124 09:30:20.333021  346330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:20.338641  346330 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:20.338682  346330 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.338698  346330 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.338709  346330 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.338718  346330 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.338727  346330 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.338734  346330 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.338741  346330 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.338747  346330 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.338757  346330 system_pods.go:126] duration metric: took 5.728957ms to wait for k8s-apps to be running ...
	I1124 09:30:20.338767  346330 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:20.338820  346330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:20.357733  346330 system_svc.go:56] duration metric: took 18.956624ms WaitForService to wait for kubelet
	I1124 09:30:20.358599  346330 kubeadm.go:587] duration metric: took 4.676515085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:20.358629  346330 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:20.363231  346330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:20.363257  346330 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:20.363289  346330 node_conditions.go:105] duration metric: took 4.654352ms to run NodePressure ...
	I1124 09:30:20.363303  346330 start.go:242] waiting for startup goroutines ...
	I1124 09:30:20.363313  346330 start.go:247] waiting for cluster config update ...
	I1124 09:30:20.363345  346330 start.go:256] writing updated cluster config ...
	I1124 09:30:20.363650  346330 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:20.369452  346330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:20.373717  346330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:19.611672  350567 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.520871166s)
	I1124 09:30:19.611715  350567 kic.go:203] duration metric: took 4.521020447s to extract preloaded images to volume ...
	W1124 09:30:19.612119  350567 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:30:19.612200  350567 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:30:19.612273  350567 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:30:19.706294  350567 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-673346 --name embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-673346 --network embed-certs-673346 --ip 192.168.76.2 --volume embed-certs-673346:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:30:20.123957  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Running}}
	I1124 09:30:20.146501  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.169846  350567 cli_runner.go:164] Run: docker exec embed-certs-673346 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:30:20.229570  350567 oci.go:144] the created container "embed-certs-673346" has a running status.
	I1124 09:30:20.229610  350567 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa...
	I1124 09:30:20.290959  350567 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:30:20.332257  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.366886  350567 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:30:20.366912  350567 kic_runner.go:114] Args: [docker exec --privileged embed-certs-673346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:30:20.421029  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.448864  350567 machine.go:94] provisionDockerMachine start ...
	I1124 09:30:20.448975  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:20.471107  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:20.471475  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:20.471493  350567 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:30:20.472225  350567 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52462->127.0.0.1:33133: read: connection reset by peer
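
The dial above fails because sshd inside the freshly started container is not accepting connections yet; minikube keeps retrying until the handshake succeeds about three seconds later. A sketch of that retry, assuming golang.org/x/crypto/ssh; dialWithRetry is an illustrative name, the address, user, and key path are the ones from the log, and host-key checking is skipped as in this test environment:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH handshake until it succeeds
// or the deadline passes, absorbing "connection reset by peer" while
// sshd in the container is still starting.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33133", "docker",
		"/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("ssh ready")
}
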
	I1124 09:30:23.653448  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.653510  350567 ubuntu.go:182] provisioning hostname "embed-certs-673346"
	I1124 09:30:23.653756  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.678607  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.678937  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.678958  350567 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-673346 && echo "embed-certs-673346" | sudo tee /etc/hostname
	I1124 09:30:23.850425  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.850503  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.874386  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.874730  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.874760  350567 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-673346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-673346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-673346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:30:24.034104  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:30:24.034135  350567 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:30:24.034160  350567 ubuntu.go:190] setting up certificates
	I1124 09:30:24.034174  350567 provision.go:84] configureAuth start
	I1124 09:30:24.034235  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.056481  350567 provision.go:143] copyHostCerts
	I1124 09:30:24.056552  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:30:24.056564  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:30:24.056628  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:30:24.056755  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:30:24.056763  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:30:24.056806  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:30:24.056918  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:30:24.056931  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:30:24.056973  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:30:24.057091  350567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-673346 san=[127.0.0.1 192.168.76.2 embed-certs-673346 localhost minikube]
	I1124 09:30:24.206865  350567 provision.go:177] copyRemoteCerts
	I1124 09:30:24.206922  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:30:24.206955  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.226403  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	W1124 09:30:22.380052  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:24.380391  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:24.331162  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:30:24.354961  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:30:24.377647  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:30:24.400776  350567 provision.go:87] duration metric: took 366.587357ms to configureAuth
	I1124 09:30:24.400805  350567 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:30:24.400996  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:24.401117  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.424078  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:24.424426  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:24.424457  350567 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:30:24.754396  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:30:24.754421  350567 machine.go:97] duration metric: took 4.30553632s to provisionDockerMachine
	I1124 09:30:24.754433  350567 client.go:176] duration metric: took 10.29828879s to LocalClient.Create
	I1124 09:30:24.754450  350567 start.go:167] duration metric: took 10.298341795s to libmachine.API.Create "embed-certs-673346"
	I1124 09:30:24.754459  350567 start.go:293] postStartSetup for "embed-certs-673346" (driver="docker")
	I1124 09:30:24.754471  350567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:30:24.754538  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:30:24.754583  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.780786  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:24.896450  350567 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:30:24.900141  350567 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:30:24.900169  350567 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:30:24.900181  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:30:24.900238  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:30:24.900352  350567 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:30:24.900469  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:30:24.908686  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:24.930429  350567 start.go:296] duration metric: took 175.958432ms for postStartSetup
	I1124 09:30:24.930756  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.951946  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:24.952213  350567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:30:24.952254  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.971774  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.084790  350567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:30:25.089773  350567 start.go:128] duration metric: took 10.635765016s to createHost
	I1124 09:30:25.089794  350567 start.go:83] releasing machines lock for "embed-certs-673346", held for 10.635883769s
	I1124 09:30:25.089855  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:25.107834  350567 ssh_runner.go:195] Run: cat /version.json
	I1124 09:30:25.107876  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.107876  350567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:30:25.107963  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.126027  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.127155  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.225482  350567 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:25.285543  350567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:30:25.321857  350567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:30:25.327941  350567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:30:25.328019  350567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:30:25.524839  350567 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:30:25.524863  350567 start.go:496] detecting cgroup driver to use...
	I1124 09:30:25.524891  350567 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:30:25.524934  350567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:30:25.542024  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:30:25.555182  350567 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:30:25.555243  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:30:25.572649  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:30:25.594452  350567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:30:25.688181  350567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:30:25.780701  350567 docker.go:234] disabling docker service ...
	I1124 09:30:25.780765  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:30:25.801555  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:30:25.816167  350567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:30:25.936230  350567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:30:26.054601  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:30:26.071219  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:30:26.089974  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:26.264765  350567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:30:26.264839  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.285104  350567 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:30:26.285169  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.296551  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.307239  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.318284  350567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:30:26.328483  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.339222  350567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.356669  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.367765  350567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:30:26.377490  350567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:30:26.386986  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:26.502123  350567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:30:26.740769  350567 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:30:26.740830  350567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:30:26.745782  350567 start.go:564] Will wait 60s for crictl version
	I1124 09:30:26.745832  350567 ssh_runner.go:195] Run: which crictl
	I1124 09:30:26.750426  350567 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:30:26.783507  350567 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:30:26.783585  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.821826  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.866519  350567 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:30:26.868046  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:26.895427  350567 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:26.900350  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
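
The one-liner above makes the hosts entry idempotent: strip any existing host.minikube.internal line, append the gateway mapping, and copy the result back over /etc/hosts. The same edit expressed as a pure Go function for clarity; updateHosts is an illustrative name:

package main

import (
	"fmt"
	"strings"
)

// updateHosts drops any line already ending in "\thost.minikube.internal"
// (the grep -v above) and appends the gateway mapping (the echo above).
func updateHosts(contents, ip string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n")
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal"
	fmt.Println(updateHosts(hosts, "192.168.76.1")) // entry appears exactly once
}
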
	I1124 09:30:26.913358  350567 kubeadm.go:884] updating cluster {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:26.913735  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.099545  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.285631  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.447699  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:27.447838  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.627950  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.802057  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.982378  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.023587  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.023612  350567 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:28.023667  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.057634  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.057658  350567 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:28.057667  350567 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 09:30:28.057782  350567 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-673346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:30:28.057861  350567 ssh_runner.go:195] Run: crio config
	I1124 09:30:28.125113  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:28.125141  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:28.125163  350567 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:28.125194  350567 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-673346 NodeName:embed-certs-673346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:28.125384  350567 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-673346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:30:28.125457  350567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:28.136211  350567 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:28.136278  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:28.146766  350567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 09:30:28.162970  350567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:28.183026  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 09:30:28.199769  350567 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:28.204631  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:28.216670  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:28.333908  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:28.358960  350567 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346 for IP: 192.168.76.2
	I1124 09:30:28.358982  350567 certs.go:195] generating shared ca certs ...
	I1124 09:30:28.359000  350567 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.359152  350567 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:28.359204  350567 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:28.359216  350567 certs.go:257] generating profile certs ...
	I1124 09:30:28.359284  350567 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key
	I1124 09:30:28.359301  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt with IP's: []
	I1124 09:30:28.437471  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt ...
	I1124 09:30:28.437495  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt: {Name:mk8b7253b9b301c91d2672344892984576a60144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437641  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key ...
	I1124 09:30:28.437654  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key: {Name:mk2a06bce20bfcf3fd65f78bc031396f7e03338b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437728  350567 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844
	I1124 09:30:28.437742  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 09:30:28.481815  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 ...
	I1124 09:30:28.481840  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844: {Name:mk5fa5046e27fe7d2f0e4475b095f002a239fd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482010  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 ...
	I1124 09:30:28.482030  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844: {Name:mkd08edea57155db981a087021feb4524402ea29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482143  350567 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt
	I1124 09:30:28.482230  350567 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key
	I1124 09:30:28.482292  350567 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key
	I1124 09:30:28.482308  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt with IP's: []
	I1124 09:30:28.544080  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt ...
	I1124 09:30:28.544107  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt: {Name:mkfd4e68c065efc0731596098a6a75426ddfaab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544288  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key ...
	I1124 09:30:28.544305  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key: {Name:mk2884332d5edfe59fc22312877e42be26c5e588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544523  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:28.544565  350567 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:28.544576  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:28.544600  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:28.544632  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:28.544654  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:28.544696  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:28.545236  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:28.564379  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:28.581704  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:28.598963  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:28.616453  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:30:28.634084  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:30:28.650828  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:28.667240  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:30:28.683947  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:28.702546  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:28.721403  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:28.738492  350567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:28.750548  350567 ssh_runner.go:195] Run: openssl version
	I1124 09:30:28.756295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:28.764541  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768172  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768220  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.802295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:28.811614  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:28.819817  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823382  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823443  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.856763  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:28.865271  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:28.873545  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877598  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877655  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.919370  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
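The openssl/ln exchanges above (09:30:28.750548 through 09:30:28.919370) follow the standard OpenSSL c_rehash convention: clients locate a CA in /etc/ssl/certs by the certificate's subject-name hash, so each PEM under /usr/share/ca-certificates gets a <hash>.0 symlink. A minimal Go sketch of that one step, assuming a local filesystem rather than minikube's SSH runner; installCA and the hard-coded path are illustrative, not minikube source:

// Sketch only: reproduce the hash-and-symlink step seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash that
	// OpenSSL uses to look certificates up in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of the logged `test -L ... || ln -fs ...`: leave an
	// existing link alone, otherwise create it.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}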
	I1124 09:30:28.928260  350567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:28.931896  350567 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:30:28.931943  350567 kubeadm.go:401] StartCluster: {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:28.932015  350567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:28.932059  350567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:28.958681  350567 cri.go:89] found id: ""
	I1124 09:30:28.958744  350567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:28.966642  350567 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:30:28.974403  350567 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:30:28.974471  350567 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:30:28.981638  350567 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:30:28.981659  350567 kubeadm.go:158] found existing configuration files:
	
	I1124 09:30:28.981689  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:30:28.989069  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:30:28.989126  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:30:28.996227  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:30:29.003603  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:30:29.003655  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:30:29.010559  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.017747  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:30:29.017791  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.024818  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:30:29.032072  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:30:29.032110  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:30:29.039184  350567 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:30:29.109731  350567 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:30:29.170264  350567 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 09:30:26.880546  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:29.379444  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:31.879625  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:34.379563  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:38.748399  350567 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:30:38.748506  350567 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:30:38.748626  350567 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:30:38.748685  350567 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:30:38.748714  350567 kubeadm.go:319] OS: Linux
	I1124 09:30:38.748760  350567 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:30:38.748798  350567 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:30:38.748841  350567 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:30:38.748881  350567 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:30:38.748952  350567 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:30:38.749042  350567 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:30:38.749116  350567 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:30:38.749159  350567 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:30:38.749218  350567 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:30:38.749302  350567 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:30:38.749395  350567 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:30:38.749449  350567 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:30:38.750906  350567 out.go:252]   - Generating certificates and keys ...
	I1124 09:30:38.750995  350567 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:30:38.751089  350567 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:30:38.751177  350567 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:30:38.751224  350567 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:30:38.751273  350567 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:30:38.751317  350567 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:30:38.751438  350567 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:30:38.751613  350567 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751694  350567 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:30:38.751864  350567 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751935  350567 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:30:38.752013  350567 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:30:38.752054  350567 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:30:38.752103  350567 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:30:38.752193  350567 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:30:38.752302  350567 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:30:38.752409  350567 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:30:38.752476  350567 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:30:38.752520  350567 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:30:38.752585  350567 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:30:38.752640  350567 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:30:38.753908  350567 out.go:252]   - Booting up control plane ...
	I1124 09:30:38.753982  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:30:38.754048  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:30:38.754101  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:30:38.754214  350567 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:30:38.754351  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:30:38.754483  350567 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:30:38.754595  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:30:38.754657  350567 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:30:38.754803  350567 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:30:38.754931  350567 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:30:38.755022  350567 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.798528ms
	I1124 09:30:38.755160  350567 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:30:38.755241  350567 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 09:30:38.755362  350567 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:30:38.755437  350567 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:30:38.755504  350567 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.263591063s
	I1124 09:30:38.755565  350567 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.451899812s
	I1124 09:30:38.755620  350567 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002226153s
	I1124 09:30:38.755712  350567 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:30:38.755827  350567 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:30:38.755921  350567 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:30:38.756130  350567 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-673346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:30:38.756180  350567 kubeadm.go:319] [bootstrap-token] Using token: s5v8q1.i02i5m2whwuijtw1
	I1124 09:30:38.757350  350567 out.go:252]   - Configuring RBAC rules ...
	I1124 09:30:38.757460  350567 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:30:38.757561  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:30:38.757739  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:30:38.757875  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:30:38.758003  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:30:38.758127  350567 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:30:38.758258  350567 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:30:38.758326  350567 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:30:38.758411  350567 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:30:38.758419  350567 kubeadm.go:319] 
	I1124 09:30:38.758489  350567 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:30:38.758501  350567 kubeadm.go:319] 
	I1124 09:30:38.758566  350567 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:30:38.758571  350567 kubeadm.go:319] 
	I1124 09:30:38.758593  350567 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:30:38.758643  350567 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:30:38.758691  350567 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:30:38.758696  350567 kubeadm.go:319] 
	I1124 09:30:38.758770  350567 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:30:38.758783  350567 kubeadm.go:319] 
	I1124 09:30:38.758851  350567 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:30:38.758864  350567 kubeadm.go:319] 
	I1124 09:30:38.758912  350567 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:30:38.758992  350567 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:30:38.759090  350567 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:30:38.759098  350567 kubeadm.go:319] 
	I1124 09:30:38.759200  350567 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:30:38.759305  350567 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:30:38.759315  350567 kubeadm.go:319] 
	I1124 09:30:38.759405  350567 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759526  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:30:38.759550  350567 kubeadm.go:319] 	--control-plane 
	I1124 09:30:38.759555  350567 kubeadm.go:319] 
	I1124 09:30:38.759678  350567 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:30:38.759688  350567 kubeadm.go:319] 
	I1124 09:30:38.759796  350567 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759949  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
	I1124 09:30:38.759964  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:38.759972  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:38.761479  350567 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:30:38.762425  350567 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:30:38.766592  350567 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:30:38.766608  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:30:38.779390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:30:38.984251  350567 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:30:38.984311  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:38.984390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-673346 minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-673346 minikube.k8s.io/primary=true
	I1124 09:30:38.993988  350567 ops.go:34] apiserver oom_adj: -16
	I1124 09:30:39.059843  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 09:30:36.879966  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:38.880135  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:41.379898  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:39.560572  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.059940  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.560087  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.060193  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.560831  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.059938  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.560813  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.059947  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.560941  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.633307  350567 kubeadm.go:1114] duration metric: took 4.64904392s to wait for elevateKubeSystemPrivileges
	I1124 09:30:43.633356  350567 kubeadm.go:403] duration metric: took 14.701415807s to StartCluster
	I1124 09:30:43.633377  350567 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.633432  350567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:43.634680  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.634890  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:30:43.634909  350567 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:43.634960  350567 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-673346"
	I1124 09:30:43.634893  350567 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:43.634981  350567 addons.go:70] Setting default-storageclass=true in profile "embed-certs-673346"
	I1124 09:30:43.635001  350567 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-673346"
	I1124 09:30:43.634977  350567 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-673346"
	I1124 09:30:43.635127  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.635080  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:43.635321  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.635592  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.637167  350567 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:43.638326  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:43.662561  350567 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:43.663142  350567 addons.go:239] Setting addon default-storageclass=true in "embed-certs-673346"
	I1124 09:30:43.663183  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.663673  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.663998  350567 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.664015  350567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:43.664062  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691153  350567 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.691177  350567 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:43.691228  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691586  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.714047  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.730614  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:30:43.771446  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:43.810568  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.828496  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.934529  350567 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
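The sed pipeline logged at 09:30:43.730614 patches the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors, which is what the "host record injected" line above confirms. Under those assumptions the patched Corefile fragment would read roughly (elided plugins marked with ...):

.:53 {
        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
}

This is how pods resolve host.minikube.internal back to the Docker host: a static hosts entry that falls through to the normal forwarders for everything else.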
	I1124 09:30:43.935556  350567 node_ready.go:35] waiting up to 6m0s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:44.140071  350567 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:30:44.141027  350567 addons.go:530] duration metric: took 506.116331ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1124 09:30:43.881243  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:46.378931  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:44.438080  350567 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-673346" context rescaled to 1 replicas
	W1124 09:30:45.938541  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:47.939500  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:48.879272  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:50.879407  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:52.380637  346330 pod_ready.go:94] pod "coredns-66bc5c9577-gn9zx" is "Ready"
	I1124 09:30:52.380665  346330 pod_ready.go:86] duration metric: took 32.006923448s for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.383181  346330 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.386812  346330 pod_ready.go:94] pod "etcd-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.386836  346330 pod_ready.go:86] duration metric: took 3.636091ms for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.388809  346330 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.392121  346330 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.392137  346330 pod_ready.go:86] duration metric: took 3.312038ms for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.393861  346330 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.577642  346330 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.577666  346330 pod_ready.go:86] duration metric: took 183.789548ms for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.777477  346330 pod_ready.go:83] waiting for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.177318  346330 pod_ready.go:94] pod "kube-proxy-2vm2s" is "Ready"
	I1124 09:30:53.177358  346330 pod_ready.go:86] duration metric: took 399.857272ms for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.377464  346330 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777288  346330 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:53.777312  346330 pod_ready.go:86] duration metric: took 399.822555ms for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777323  346330 pod_ready.go:40] duration metric: took 33.407838856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:53.820139  346330 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:53.822727  346330 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-164377" cluster and "default" namespace by default
	W1124 09:30:49.939645  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:52.439029  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	I1124 09:30:54.938888  350567 node_ready.go:49] node "embed-certs-673346" is "Ready"
	I1124 09:30:54.938913  350567 node_ready.go:38] duration metric: took 11.003315497s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:54.938926  350567 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:54.938977  350567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:54.950812  350567 api_server.go:72] duration metric: took 11.315807298s to wait for apiserver process to appear ...
	I1124 09:30:54.950847  350567 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:54.950868  350567 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:30:54.956132  350567 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:30:54.957173  350567 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:54.957194  350567 api_server.go:131] duration metric: took 6.340368ms to wait for apiserver health ...
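The healthz probe recorded at 09:30:54.950868 through 09:30:54.956132 is a plain HTTPS GET retried until the endpoint answers 200 with body "ok". A self-contained sketch of that loop, using the node endpoint from this run; waitHealthy is a hypothetical helper, and the InsecureSkipVerify shortcut is for brevity only, whereas minikube's real client trusts the cluster CA:

// Sketch only: poll an apiserver /healthz endpoint until it is healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}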
	I1124 09:30:54.957201  350567 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:54.960442  350567 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:54.960470  350567 system_pods.go:61] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.960475  350567 system_pods.go:61] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.960481  350567 system_pods.go:61] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.960484  350567 system_pods.go:61] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.960489  350567 system_pods.go:61] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.960492  350567 system_pods.go:61] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.960495  350567 system_pods.go:61] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.960503  350567 system_pods.go:61] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.960507  350567 system_pods.go:74] duration metric: took 3.301271ms to wait for pod list to return data ...
	I1124 09:30:54.960515  350567 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:54.962760  350567 default_sa.go:45] found service account: "default"
	I1124 09:30:54.962777  350567 default_sa.go:55] duration metric: took 2.256858ms for default service account to be created ...
	I1124 09:30:54.962784  350567 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:54.967150  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:54.967177  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.967185  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.967193  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.967199  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.967205  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.967216  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.967226  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.967234  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.967264  350567 retry.go:31] will retry after 253.013546ms: missing components: kube-dns
	I1124 09:30:55.224543  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.224572  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.224578  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.224584  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.224589  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.224595  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.224599  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.224604  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.224619  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.224640  350567 retry.go:31] will retry after 278.082193ms: missing components: kube-dns
	I1124 09:30:55.506580  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.506609  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.506618  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.506625  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.506630  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.506636  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.506641  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.506646  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.506661  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.506688  350567 retry.go:31] will retry after 307.004154ms: missing components: kube-dns
	I1124 09:30:55.818537  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.818854  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.818862  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.818868  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.818872  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.818877  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.818881  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.818885  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.818890  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.818908  350567 retry.go:31] will retry after 519.354598ms: missing components: kube-dns
	I1124 09:30:56.341803  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:56.341831  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running
	I1124 09:30:56.341837  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:56.341841  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:56.341845  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:56.341849  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:56.341853  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:56.341856  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:56.341861  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running
	I1124 09:30:56.341873  350567 system_pods.go:126] duration metric: took 1.379080603s to wait for k8s-apps to be running ...
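The four "will retry after ..." lines above come from a generic poll-with-backoff helper (retry.go): the pod list is re-checked with a slowly growing, jittered delay until kube-dns reports Running or a deadline passes. A self-contained sketch of that pattern, where the closure stands in for the real pod listing and the growth rate and jitter are assumptions, not minikube's exact parameters:

// Sketch only: retry a check with growing, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, check func() error) error {
	delay := 250 * time.Millisecond
	end := time.Now().Add(deadline)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(end) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Grow the delay ~10% per attempt plus a little jitter,
		// roughly matching the 253/278/307/519ms spacing in the log.
		jitter := time.Duration(rand.Int63n(int64(delay / 4)))
		fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay = delay * 11 / 10
	}
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil // kube-dns came up on the fourth check
	})
}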
	I1124 09:30:56.341884  350567 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:56.341932  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:56.354699  350567 system_svc.go:56] duration metric: took 12.804001ms WaitForService to wait for kubelet
	I1124 09:30:56.354739  350567 kubeadm.go:587] duration metric: took 12.719737164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:56.354759  350567 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:56.357637  350567 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:56.357683  350567 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:56.357699  350567 node_conditions.go:105] duration metric: took 2.935054ms to run NodePressure ...
	I1124 09:30:56.357714  350567 start.go:242] waiting for startup goroutines ...
	I1124 09:30:56.357729  350567 start.go:247] waiting for cluster config update ...
	I1124 09:30:56.357742  350567 start.go:256] writing updated cluster config ...
	I1124 09:30:56.358065  350567 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:56.361813  350567 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:56.364971  350567 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.368743  350567 pod_ready.go:94] pod "coredns-66bc5c9577-vgl62" is "Ready"
	I1124 09:30:56.368763  350567 pod_ready.go:86] duration metric: took 3.773601ms for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.370575  350567 pod_ready.go:83] waiting for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.374046  350567 pod_ready.go:94] pod "etcd-embed-certs-673346" is "Ready"
	I1124 09:30:56.374066  350567 pod_ready.go:86] duration metric: took 3.473581ms for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.375786  350567 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.379022  350567 pod_ready.go:94] pod "kube-apiserver-embed-certs-673346" is "Ready"
	I1124 09:30:56.379041  350567 pod_ready.go:86] duration metric: took 3.236137ms for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.380667  350567 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.765494  350567 pod_ready.go:94] pod "kube-controller-manager-embed-certs-673346" is "Ready"
	I1124 09:30:56.765522  350567 pod_ready.go:86] duration metric: took 384.833249ms for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.965563  350567 pod_ready.go:83] waiting for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.365568  350567 pod_ready.go:94] pod "kube-proxy-m54gs" is "Ready"
	I1124 09:30:57.365594  350567 pod_ready.go:86] duration metric: took 400.007869ms for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.566275  350567 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966130  350567 pod_ready.go:94] pod "kube-scheduler-embed-certs-673346" is "Ready"
	I1124 09:30:57.966154  350567 pod_ready.go:86] duration metric: took 399.858862ms for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966168  350567 pod_ready.go:40] duration metric: took 1.604321652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:58.012793  350567 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:58.014766  350567 out.go:179] * Done! kubectl is now configured to use "embed-certs-673346" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:30:55 embed-certs-673346 crio[777]: time="2025-11-24T09:30:55.150899383Z" level=info msg="Starting container: 9c2d5fcab603b5ddac84ac6484de90f1df8b5210742b95f5e5bb5164b18fbfa9" id=5fdf4f39-bfaa-4c45-bafc-10e5893da31b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:55 embed-certs-673346 crio[777]: time="2025-11-24T09:30:55.152838349Z" level=info msg="Started container" PID=1829 containerID=9c2d5fcab603b5ddac84ac6484de90f1df8b5210742b95f5e5bb5164b18fbfa9 description=kube-system/coredns-66bc5c9577-vgl62/coredns id=5fdf4f39-bfaa-4c45-bafc-10e5893da31b name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb9c79f0782a69eb81a9e319c3e362a7d8a06118e6d3849f75b833fac0813078
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.460767763Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a8e9e7a8-578e-493c-9f21-ae6b1edde6cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.460845861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.466080153Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0ab13e7e1254702cfced7e3492c5df633a281c93e6d6a83498ea097490658e58 UID:4646ee42-5d8b-47af-825e-b809a988472f NetNS:/var/run/netns/56146fd1-91a2-4461-8e92-acd194f9638d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab08}] Aliases:map[]}"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.466138212Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.47637433Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0ab13e7e1254702cfced7e3492c5df633a281c93e6d6a83498ea097490658e58 UID:4646ee42-5d8b-47af-825e-b809a988472f NetNS:/var/run/netns/56146fd1-91a2-4461-8e92-acd194f9638d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab08}] Aliases:map[]}"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.476550821Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.477523835Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.478549088Z" level=info msg="Ran pod sandbox 0ab13e7e1254702cfced7e3492c5df633a281c93e6d6a83498ea097490658e58 with infra container: default/busybox/POD" id=a8e9e7a8-578e-493c-9f21-ae6b1edde6cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.479864287Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=723ef3a5-6e69-475b-8fcc-5c5914cdb1a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.479999119Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=723ef3a5-6e69-475b-8fcc-5c5914cdb1a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.480048294Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=723ef3a5-6e69-475b-8fcc-5c5914cdb1a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.480796197Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9b15854-e03f-480f-9a4e-1e8a5082979e name=/runtime.v1.ImageService/PullImage
	Nov 24 09:30:58 embed-certs-673346 crio[777]: time="2025-11-24T09:30:58.482570093Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.767230114Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a9b15854-e03f-480f-9a4e-1e8a5082979e name=/runtime.v1.ImageService/PullImage
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.767989222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d4203541-458d-4e72-b24a-26152a3edb15 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.769265587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6492f309-fadc-4fc8-9948-93ebc319c5c3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.77236887Z" level=info msg="Creating container: default/busybox/busybox" id=cb5c7690-2179-4ff4-b0f2-0da7352894f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.77251188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.776193657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.776601354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.808159492Z" level=info msg="Created container ee5f7625cc438d97dbe7701284b0f7e8795b15207b0fcdecac536a4bd4f2aeaa: default/busybox/busybox" id=cb5c7690-2179-4ff4-b0f2-0da7352894f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.808748832Z" level=info msg="Starting container: ee5f7625cc438d97dbe7701284b0f7e8795b15207b0fcdecac536a4bd4f2aeaa" id=56f1f176-93fb-4943-87f3-3c3bd0e40453 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:59 embed-certs-673346 crio[777]: time="2025-11-24T09:30:59.810642517Z" level=info msg="Started container" PID=1904 containerID=ee5f7625cc438d97dbe7701284b0f7e8795b15207b0fcdecac536a4bd4f2aeaa description=default/busybox/busybox id=56f1f176-93fb-4943-87f3-3c3bd0e40453 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ab13e7e1254702cfced7e3492c5df633a281c93e6d6a83498ea097490658e58
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ee5f7625cc438       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   0ab13e7e12547       busybox                                      default
	9c2d5fcab603b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   eb9c79f0782a6       coredns-66bc5c9577-vgl62                     kube-system
	6d730f04d8ab1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   edebc6f5f7d9b       storage-provisioner                          kube-system
	df248947883c7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   e7ccf753e2b92       kindnet-zm85n                                kube-system
	37bf911d626cd       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   0c9f1ad5ffc40       kube-proxy-m54gs                             kube-system
	e12b81fdcd961       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   77d322b5e7a46       etcd-embed-certs-673346                      kube-system
	9df5fb611d066       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   9b5e58d7472e4       kube-controller-manager-embed-certs-673346   kube-system
	a6e70ed113516       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   0449732cba674       kube-apiserver-embed-certs-673346            kube-system
	b72e877336eca       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   fddbefbc05946       kube-scheduler-embed-certs-673346            kube-system
	
	
	==> coredns [9c2d5fcab603b5ddac84ac6484de90f1df8b5210742b95f5e5bb5164b18fbfa9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43924 - 44366 "HINFO IN 818866514312010575.9183790295708531258. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.032551807s
	
	
	==> describe nodes <==
	Name:               embed-certs-673346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-673346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=embed-certs-673346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:30:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-673346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:30:54 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:30:54 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:30:54 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:30:54 +0000   Mon, 24 Nov 2025 09:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-673346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d639e906-b423-4ee2-aa7b-1de85e945d2c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-vgl62                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-673346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-zm85n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-673346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-673346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-m54gs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-673346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-673346 event: Registered Node embed-certs-673346 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-673346 status is now: NodeReady
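
The node description above is ordinary kubectl describe output captured by the test helpers; it can be regenerated against this profile while the cluster is up, assuming the kubeconfig context minikube created still exists:

    kubectl --context embed-certs-673346 describe node embed-certs-673346
    # condensed one-line view of the same node
    kubectl --context embed-certs-673346 get nodes -o wide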
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [e12b81fdcd961e8f94eb2220b9528dcad4a4359bb5a78ef3829bb04e9ad0f0ae] <==
	{"level":"warn","ts":"2025-11-24T09:30:34.898193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.906703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.914465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.922486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.930093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.937543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.944904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.951380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.958425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.966461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.973880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.980412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.988873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:34.995819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.002487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.010553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.018178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.025384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.031831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.039686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.047923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.061043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.067630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.075199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:30:35.129001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:31:07 up  1:13,  0 user,  load average: 2.74, 3.25, 2.26
	Linux embed-certs-673346 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
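
The kernel section is assembled from uptime, uname -a, and the PRETTY_NAME line of /etc/os-release inside the node; the equivalent can be gathered by hand (a sketch, again assuming the profile is still running):

    minikube -p embed-certs-673346 ssh -- uptime
    minikube -p embed-certs-673346 ssh -- uname -a
    minikube -p embed-certs-673346 ssh -- grep PRETTY_NAME /etc/os-release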
	
	
	==> kindnet [df248947883c7e47a5f2f210198e2f9dfd7702008021439f656cb82f7b3ce474] <==
	I1124 09:30:44.406374       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:30:44.406636       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:30:44.406786       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:30:44.406803       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:30:44.406827       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:30:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:30:44.608233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:30:44.608318       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:30:44.608356       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:30:44.700928       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:30:45.200815       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:30:45.200854       1 metrics.go:72] Registering metrics
	I1124 09:30:45.200936       1 controller.go:711] "Syncing nftables rules"
	I1124 09:30:54.608628       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:30:54.608707       1 main.go:301] handling current node
	I1124 09:31:04.607847       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:31:04.607884       1 main.go:301] handling current node
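
The "nri plugin exited" line above is kindnet failing to reach an optional NRI socket and carrying on; the caches still sync and the node is handled afterwards. Recent kindnet output can be pulled without ssh; an assumption here is that the pods carry the default app=kindnet label:

    kubectl --context embed-certs-673346 -n kube-system logs -l app=kindnet --tail=50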
	
	
	==> kube-apiserver [a6e70ed1135162e6d2296ca51d667249b255972b3329a10093edfdb0ed111083] <==
	I1124 09:30:35.624842       1 policy_source.go:240] refreshing policies
	I1124 09:30:35.625476       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:30:35.629632       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:35.630170       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:30:35.634381       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:30:35.637940       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:30:35.638315       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:36.528300       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:30:36.531906       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:30:36.531924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:30:36.982752       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:30:37.019462       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:30:37.132206       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:30:37.138317       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 09:30:37.139388       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:30:37.143743       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:30:37.798373       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:30:38.147397       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:30:38.158794       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:30:38.165902       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:30:43.452645       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:43.456068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:43.803615       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:30:43.851738       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 09:31:06.249553       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:40098: use of closed network connection
	
	
	==> kube-controller-manager [9df5fb611d06667c68a87d88b9c01271cb43badbf093386a2426b2746849a651] <==
	I1124 09:30:42.759795       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:30:42.797016       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:30:42.797042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:30:42.797388       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 09:30:42.797461       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:30:42.797531       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-673346"
	I1124 09:30:42.797542       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:30:42.797629       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 09:30:42.797972       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:30:42.798074       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:30:42.798267       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:30:42.798365       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:30:42.798427       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:30:42.798453       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:30:42.798525       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:30:42.798553       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:30:42.798840       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:30:42.799282       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:30:42.801249       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:30:42.801450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:30:42.802828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:30:42.808127       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:30:42.817396       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:30:42.818726       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:30:57.799601       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [37bf911d626cd4c08cf1cfde6dd87dd394e293a0322c58f17fd194edb340901d] <==
	I1124 09:30:44.271118       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:30:44.334228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:30:44.434501       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:30:44.434550       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 09:30:44.434672       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:30:44.454902       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:30:44.454964       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:30:44.460028       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:30:44.460393       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:30:44.460418       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:44.461648       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:30:44.461675       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:30:44.461743       1 config.go:309] "Starting node config controller"
	I1124 09:30:44.461740       1 config.go:200] "Starting service config controller"
	I1124 09:30:44.461767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:30:44.461776       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:30:44.461774       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:30:44.461820       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:30:44.461845       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:30:44.562395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:30:44.562418       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:30:44.562418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b72e877336eca82111a9450970ffe54d6b94b9c2232cbf76705c8e909ff3e553] <==
	E1124 09:30:35.594092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:30:35.594133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:30:35.594175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:30:35.594265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:30:35.594266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:30:35.594776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:30:35.594830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:30:35.594898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:30:35.594923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:30:35.594943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:30:35.595020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:30:35.595044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:30:35.595488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:30:35.595661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:30:36.402812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:30:36.407927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:30:36.438277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:30:36.456608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:30:36.493002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:30:36.648703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:30:36.746270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:30:36.746376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:30:36.748160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:30:36.759177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 09:30:37.191185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:30:39 embed-certs-673346 kubelet[1297]: I1124 09:30:39.030253    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-673346" podStartSLOduration=1.030234733 podStartE2EDuration="1.030234733s" podCreationTimestamp="2025-11-24 09:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:39.03023138 +0000 UTC m=+1.119910773" watchObservedRunningTime="2025-11-24 09:30:39.030234733 +0000 UTC m=+1.119914127"
	Nov 24 09:30:39 embed-certs-673346 kubelet[1297]: I1124 09:30:39.039246    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-673346" podStartSLOduration=1.039224701 podStartE2EDuration="1.039224701s" podCreationTimestamp="2025-11-24 09:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:39.03920275 +0000 UTC m=+1.128882142" watchObservedRunningTime="2025-11-24 09:30:39.039224701 +0000 UTC m=+1.128904095"
	Nov 24 09:30:39 embed-certs-673346 kubelet[1297]: I1124 09:30:39.048758    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-673346" podStartSLOduration=1.048739185 podStartE2EDuration="1.048739185s" podCreationTimestamp="2025-11-24 09:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:39.048576659 +0000 UTC m=+1.138256053" watchObservedRunningTime="2025-11-24 09:30:39.048739185 +0000 UTC m=+1.138418572"
	Nov 24 09:30:39 embed-certs-673346 kubelet[1297]: I1124 09:30:39.058476    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-673346" podStartSLOduration=1.058454552 podStartE2EDuration="1.058454552s" podCreationTimestamp="2025-11-24 09:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:39.058327504 +0000 UTC m=+1.148006916" watchObservedRunningTime="2025-11-24 09:30:39.058454552 +0000 UTC m=+1.148133946"
	Nov 24 09:30:42 embed-certs-673346 kubelet[1297]: I1124 09:30:42.827020    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:30:42 embed-certs-673346 kubelet[1297]: I1124 09:30:42.827686    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925733    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7-cni-cfg\") pod \"kindnet-zm85n\" (UID: \"8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7\") " pod="kube-system/kindnet-zm85n"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925791    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7-lib-modules\") pod \"kindnet-zm85n\" (UID: \"8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7\") " pod="kube-system/kindnet-zm85n"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925819    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/280a5343-2e8e-4bfa-8589-49693afaef95-kube-proxy\") pod \"kube-proxy-m54gs\" (UID: \"280a5343-2e8e-4bfa-8589-49693afaef95\") " pod="kube-system/kube-proxy-m54gs"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925905    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/280a5343-2e8e-4bfa-8589-49693afaef95-xtables-lock\") pod \"kube-proxy-m54gs\" (UID: \"280a5343-2e8e-4bfa-8589-49693afaef95\") " pod="kube-system/kube-proxy-m54gs"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925957    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/280a5343-2e8e-4bfa-8589-49693afaef95-lib-modules\") pod \"kube-proxy-m54gs\" (UID: \"280a5343-2e8e-4bfa-8589-49693afaef95\") " pod="kube-system/kube-proxy-m54gs"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.925984    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7-xtables-lock\") pod \"kindnet-zm85n\" (UID: \"8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7\") " pod="kube-system/kindnet-zm85n"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.926005    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdkb\" (UniqueName: \"kubernetes.io/projected/8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7-kube-api-access-9xdkb\") pod \"kindnet-zm85n\" (UID: \"8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7\") " pod="kube-system/kindnet-zm85n"
	Nov 24 09:30:43 embed-certs-673346 kubelet[1297]: I1124 09:30:43.926083    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5g5\" (UniqueName: \"kubernetes.io/projected/280a5343-2e8e-4bfa-8589-49693afaef95-kube-api-access-kb5g5\") pod \"kube-proxy-m54gs\" (UID: \"280a5343-2e8e-4bfa-8589-49693afaef95\") " pod="kube-system/kube-proxy-m54gs"
	Nov 24 09:30:45 embed-certs-673346 kubelet[1297]: I1124 09:30:45.036365    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m54gs" podStartSLOduration=2.036328753 podStartE2EDuration="2.036328753s" podCreationTimestamp="2025-11-24 09:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:45.036278242 +0000 UTC m=+7.125957636" watchObservedRunningTime="2025-11-24 09:30:45.036328753 +0000 UTC m=+7.126008148"
	Nov 24 09:30:45 embed-certs-673346 kubelet[1297]: I1124 09:30:45.053475    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zm85n" podStartSLOduration=2.053459305 podStartE2EDuration="2.053459305s" podCreationTimestamp="2025-11-24 09:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:45.053370971 +0000 UTC m=+7.143050364" watchObservedRunningTime="2025-11-24 09:30:45.053459305 +0000 UTC m=+7.143138699"
	Nov 24 09:30:54 embed-certs-673346 kubelet[1297]: I1124 09:30:54.772898    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:30:54 embed-certs-673346 kubelet[1297]: I1124 09:30:54.911696    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w587l\" (UniqueName: \"kubernetes.io/projected/f54b959f-374a-4003-809e-9077f9384e37-kube-api-access-w587l\") pod \"storage-provisioner\" (UID: \"f54b959f-374a-4003-809e-9077f9384e37\") " pod="kube-system/storage-provisioner"
	Nov 24 09:30:54 embed-certs-673346 kubelet[1297]: I1124 09:30:54.911754    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2f79272-9bc2-421a-8b98-02af7ee3ad09-config-volume\") pod \"coredns-66bc5c9577-vgl62\" (UID: \"a2f79272-9bc2-421a-8b98-02af7ee3ad09\") " pod="kube-system/coredns-66bc5c9577-vgl62"
	Nov 24 09:30:54 embed-certs-673346 kubelet[1297]: I1124 09:30:54.911791    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f54b959f-374a-4003-809e-9077f9384e37-tmp\") pod \"storage-provisioner\" (UID: \"f54b959f-374a-4003-809e-9077f9384e37\") " pod="kube-system/storage-provisioner"
	Nov 24 09:30:54 embed-certs-673346 kubelet[1297]: I1124 09:30:54.911856    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp58z\" (UniqueName: \"kubernetes.io/projected/a2f79272-9bc2-421a-8b98-02af7ee3ad09-kube-api-access-qp58z\") pod \"coredns-66bc5c9577-vgl62\" (UID: \"a2f79272-9bc2-421a-8b98-02af7ee3ad09\") " pod="kube-system/coredns-66bc5c9577-vgl62"
	Nov 24 09:30:56 embed-certs-673346 kubelet[1297]: I1124 09:30:56.060612    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.060591048 podStartE2EDuration="12.060591048s" podCreationTimestamp="2025-11-24 09:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:56.060404883 +0000 UTC m=+18.150084277" watchObservedRunningTime="2025-11-24 09:30:56.060591048 +0000 UTC m=+18.150270442"
	Nov 24 09:30:56 embed-certs-673346 kubelet[1297]: I1124 09:30:56.069669    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vgl62" podStartSLOduration=12.069647849 podStartE2EDuration="12.069647849s" podCreationTimestamp="2025-11-24 09:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:56.069434618 +0000 UTC m=+18.159114011" watchObservedRunningTime="2025-11-24 09:30:56.069647849 +0000 UTC m=+18.159327243"
	Nov 24 09:30:58 embed-certs-673346 kubelet[1297]: I1124 09:30:58.234428    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjf9\" (UniqueName: \"kubernetes.io/projected/4646ee42-5d8b-47af-825e-b809a988472f-kube-api-access-lbjf9\") pod \"busybox\" (UID: \"4646ee42-5d8b-47af-825e-b809a988472f\") " pod="default/busybox"
	Nov 24 09:31:00 embed-certs-673346 kubelet[1297]: I1124 09:31:00.073678    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.785256233 podStartE2EDuration="2.073657413s" podCreationTimestamp="2025-11-24 09:30:58 +0000 UTC" firstStartedPulling="2025-11-24 09:30:58.480393026 +0000 UTC m=+20.570072404" lastFinishedPulling="2025-11-24 09:30:59.768794211 +0000 UTC m=+21.858473584" observedRunningTime="2025-11-24 09:31:00.073237433 +0000 UTC m=+22.162916827" watchObservedRunningTime="2025-11-24 09:31:00.073657413 +0000 UTC m=+22.163336807"
	
	
	==> storage-provisioner [6d730f04d8ab170e644ca8f6a85d5aee0dda6f07d87db18544afbd3628f34b28] <==
	I1124 09:30:55.164499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:30:55.172850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:30:55.172909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:30:55.175092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:55.180058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:55.180219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:30:55.180399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_68147ad7-10ea-48ed-ae09-4388ce93e6a8!
	I1124 09:30:55.180362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24e163fb-f470-4eb3-b56c-97d0ebe5b8c9", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-673346_68147ad7-10ea-48ed-ae09-4388ce93e6a8 became leader
	W1124 09:30:55.182605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:55.186452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:30:55.281558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_68147ad7-10ea-48ed-ae09-4388ce93e6a8!
	W1124 09:30:57.190737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:57.194757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:59.197832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:59.203267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:01.206933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:01.210897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:03.213904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:03.218974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:05.222424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:05.226136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.229714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.234453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
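
The provisioner's leader election still records its lease on a v1 Endpoints object, which is what triggers the repeating deprecation warnings above. The lease object named in the log can be inspected directly, assuming the context is still available:

    kubectl --context embed-certs-673346 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml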
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-673346 -n embed-certs-673346
E1124 09:31:08.098877    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-673346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.19s)
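
The post-mortem above ends with a field-selector query that keeps only pods outside the Running phase; the same filter is useful when reproducing the failure by hand (it assumes the embed-certs-673346 context still exists):

    # list every pod not in the Running phase, across all namespaces
    kubectl --context embed-certs-673346 get po -A --field-selector=status.phase!=Running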

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-164377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-164377 --alsologtostderr -v=1: exit status 80 (1.750378801s)

-- stdout --
	* Pausing node default-k8s-diff-port-164377 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 09:31:07.042122  358930 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:31:07.042405  358930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:07.042415  358930 out.go:374] Setting ErrFile to fd 2...
	I1124 09:31:07.042419  358930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:07.042621  358930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:31:07.042873  358930 out.go:368] Setting JSON to false
	I1124 09:31:07.042891  358930 mustload.go:66] Loading cluster: default-k8s-diff-port-164377
	I1124 09:31:07.043230  358930 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:07.043606  358930 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:31:07.063634  358930 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:31:07.063913  358930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:31:07.137226  358930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 09:31:07.125760811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:31:07.138067  358930 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-164377 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:31:07.139845  358930 out.go:179] * Pausing node default-k8s-diff-port-164377 ... 
	I1124 09:31:07.141092  358930 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:31:07.141456  358930 ssh_runner.go:195] Run: systemctl --version
	I1124 09:31:07.141510  358930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:31:07.161101  358930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:31:07.261906  358930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:31:07.283290  358930 pause.go:52] kubelet running: true
	I1124 09:31:07.283398  358930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:31:07.461110  358930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:31:07.461190  358930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:31:07.533436  358930 cri.go:89] found id: "2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6"
	I1124 09:31:07.533456  358930 cri.go:89] found id: "8fd30725243cf74c22d0fc9ddf7c963a305c1829d64d4bfeaa81eec4f11cb627"
	I1124 09:31:07.533462  358930 cri.go:89] found id: "d1d380122c4828f19d29ada5371570b902bc1915f5aa17fbda0cb5bb589a355f"
	I1124 09:31:07.533466  358930 cri.go:89] found id: "35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273"
	I1124 09:31:07.533470  358930 cri.go:89] found id: "cd5ff3cd6ed0a4412d7185ce32dcfa542107181ea6781701296539e88ec8c7f1"
	I1124 09:31:07.533475  358930 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:31:07.533480  358930 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:31:07.533484  358930 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:31:07.533489  358930 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:31:07.533498  358930 cri.go:89] found id: "f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	I1124 09:31:07.533506  358930 cri.go:89] found id: "185c940805d0b7d87ece5c74083744172e4a580f1090813538cc221bc51f08ca"
	I1124 09:31:07.533509  358930 cri.go:89] found id: ""
	I1124 09:31:07.533568  358930 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:31:07.545367  358930 retry.go:31] will retry after 253.369145ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:07Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:31:07.799833  358930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:31:07.813206  358930 pause.go:52] kubelet running: false
	I1124 09:31:07.813276  358930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:31:07.971169  358930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:31:07.971256  358930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:31:08.053105  358930 cri.go:89] found id: "2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6"
	I1124 09:31:08.053134  358930 cri.go:89] found id: "8fd30725243cf74c22d0fc9ddf7c963a305c1829d64d4bfeaa81eec4f11cb627"
	I1124 09:31:08.053141  358930 cri.go:89] found id: "d1d380122c4828f19d29ada5371570b902bc1915f5aa17fbda0cb5bb589a355f"
	I1124 09:31:08.053146  358930 cri.go:89] found id: "35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273"
	I1124 09:31:08.053152  358930 cri.go:89] found id: "cd5ff3cd6ed0a4412d7185ce32dcfa542107181ea6781701296539e88ec8c7f1"
	I1124 09:31:08.053159  358930 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:31:08.053163  358930 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:31:08.053168  358930 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:31:08.053173  358930 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:31:08.053181  358930 cri.go:89] found id: "f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	I1124 09:31:08.053191  358930 cri.go:89] found id: "185c940805d0b7d87ece5c74083744172e4a580f1090813538cc221bc51f08ca"
	I1124 09:31:08.053195  358930 cri.go:89] found id: ""
	I1124 09:31:08.053236  358930 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:31:08.068232  358930 retry.go:31] will retry after 405.793223ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:08Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:31:08.474920  358930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:31:08.489517  358930 pause.go:52] kubelet running: false
	I1124 09:31:08.489576  358930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:31:08.635248  358930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:31:08.635350  358930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:31:08.703494  358930 cri.go:89] found id: "2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6"
	I1124 09:31:08.703520  358930 cri.go:89] found id: "8fd30725243cf74c22d0fc9ddf7c963a305c1829d64d4bfeaa81eec4f11cb627"
	I1124 09:31:08.703526  358930 cri.go:89] found id: "d1d380122c4828f19d29ada5371570b902bc1915f5aa17fbda0cb5bb589a355f"
	I1124 09:31:08.703530  358930 cri.go:89] found id: "35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273"
	I1124 09:31:08.703534  358930 cri.go:89] found id: "cd5ff3cd6ed0a4412d7185ce32dcfa542107181ea6781701296539e88ec8c7f1"
	I1124 09:31:08.703548  358930 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:31:08.703553  358930 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:31:08.703557  358930 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:31:08.703562  358930 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:31:08.703581  358930 cri.go:89] found id: "f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	I1124 09:31:08.703589  358930 cri.go:89] found id: "185c940805d0b7d87ece5c74083744172e4a580f1090813538cc221bc51f08ca"
	I1124 09:31:08.703594  358930 cri.go:89] found id: ""
	I1124 09:31:08.703644  358930 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:31:08.717974  358930 out.go:203] 
	W1124 09:31:08.719157  358930 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:31:08.719174  358930 out.go:285] * 
	W1124 09:31:08.725063  358930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:31:08.726210  358930 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-164377 --alsologtostderr -v=1 failed: exit status 80
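The exit status 80 above is the GUEST_PAUSE failure from the stderr block: "sudo runc list -f json" died with "open /run/runc: no such file or directory" even though crictl had just listed eleven running containers. A minimal diagnostic sketch for checking this by hand, assuming the default-k8s-diff-port-164377 profile is still up (the minikube ssh wrapping and the crictl flags are assumptions; only the runc invocation is taken verbatim from the log):

	# Does the runc state root that the pause path reads exist at all?
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-164377 "sudo ls -ld /run/runc"

	# Re-run the exact listing command minikube issued on the node:
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-164377 "sudo runc list -f json"

	# Compare with what cri-o itself reports as running:
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-164377 "sudo crictl ps --state running --quiet"

If crictl still lists containers while /run/runc is absent, that points at runc's default state root differing from the root the cri-o runtime actually uses, which would explain why every retry above failed the same way.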
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
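For a manual spot-check of the same snapshot, assuming a POSIX shell on the host:

	# Print any proxy-related variables the test process would have inherited:
	env | grep -i '_proxy' || echo '<empty>'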
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-164377
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-164377:

-- stdout --
	[
	    {
	        "Id": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	        "Created": "2025-11-24T09:28:58.752077739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346670,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:30:06.731691108Z",
	            "FinishedAt": "2025-11-24T09:30:05.751729354Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hostname",
	        "HostsPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hosts",
	        "LogPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c-json.log",
	        "Name": "/default-k8s-diff-port-164377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-164377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-164377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	                "LowerDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-164377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-164377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-164377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8800361c775545b4d965033a039d0f80fa3415b8b0e2f5e9328b13e6b4b027bd",
	            "SandboxKey": "/var/run/docker/netns/8800361c7755",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-164377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1e00630149587d24459445d686d13d40af862a7ea70db024de88f2ab8bf6b09",
	                    "EndpointID": "66c2f64a23a6af3af90c2548023247b954e65813ae18cfc1f617ea6a329de5a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:81:91:90:bb:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-164377",
	                        "83d485128258"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
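The 22/tcp mapping above (127.0.0.1:33128) is the SSH endpoint the pause run dialed. A quick sketch of extracting it from this inspect output, using the same Go template seen in the cli_runner log line, plus a jq variant (jq assumed to be installed):

	# Go template, as minikube's cli_runner used it:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-164377

	# Equivalent with jq:
	docker inspect default-k8s-diff-port-164377 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'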
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377: exit status 2 (333.639722ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
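Exit status 2 with Host still "Running" is consistent with a half-paused profile: the container is up, but the failed pause had already disabled kubelet. A broader probe of the same profile, assuming the standard minikube status template fields (Host, Kubelet, APIServer):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-164377 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# or machine-readable:
	out/minikube-linux-amd64 status -p default-k8s-diff-port-164377 --output json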
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25: (1.121068163s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ default-k8s-diff-port-164377 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-164377 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-673346 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
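Note: the [Unit]/[Service] fragment above is written as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To see the merged result on the node:

	# Show the kubelet unit together with all drop-ins, and the effective ExecStart
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart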
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
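Note: the rendered config written to /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked with kubeadm itself (the `config validate` subcommand exists in kubeadm v1.26 and later; a sketch using the node's bundled binary):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new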
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
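Note: that one-liner is an idempotent replace-or-append of the control-plane entry in /etc/hosts. Unrolled for readability (bash; same effect as the logged command):

	# Drop any stale control-plane line, then append the current mapping
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts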
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
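Note: the link names 3ec20f2e.0, b5213941.0 and 51391683.0 are OpenSSL subject hashes; installing each CA as `<hash>.0` under /etc/ssl/certs is what lets OpenSSL's default verification path find it. Reproduced for one cert:

	# Derive the subject hash and create the lookup symlink, as the log does
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"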
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
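Note: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h), which is how the restart path decides whether certs need regeneration. The same checks as a loop (illustrative subset of the cert list):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    || echo "${c}.crt expires within 24h"
	done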
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
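Note: the Go template in the inspect calls above resolves which host port Docker mapped to the container's SSH port (here 33128, used by the ssh clients that follow). Standalone form of the same query:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-164377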
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:14.455914  350567 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:30:14.456108  350567 start.go:159] libmachine.API.Create for "embed-certs-673346" (driver="docker")
	I1124 09:30:14.456138  350567 client.go:173] LocalClient.Create starting
	I1124 09:30:14.456212  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:30:14.456244  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456264  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456310  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:30:14.456355  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456379  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456793  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:30:14.478660  350567 cli_runner.go:211] docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:30:14.478755  350567 network_create.go:284] running [docker network inspect embed-certs-673346] to gather additional debugging logs...
	I1124 09:30:14.478786  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346
	W1124 09:30:14.498235  350567 cli_runner.go:211] docker network inspect embed-certs-673346 returned with exit code 1
	I1124 09:30:14.498267  350567 network_create.go:287] error running [docker network inspect embed-certs-673346]: docker network inspect embed-certs-673346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-673346 not found
	I1124 09:30:14.498281  350567 network_create.go:289] output of [docker network inspect embed-certs-673346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-673346 not found
	
	** /stderr **
	I1124 09:30:14.498385  350567 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:14.520018  350567 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:30:14.520793  350567 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:30:14.521788  350567 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:30:14.522707  350567 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5a70}
	I1124 09:30:14.522732  350567 network_create.go:124] attempt to create docker network embed-certs-673346 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 09:30:14.522785  350567 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-673346 embed-certs-673346
	I1124 09:30:14.586516  350567 network_create.go:108] docker network embed-certs-673346 192.168.76.0/24 created
	I1124 09:30:14.586547  350567 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-673346" container
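Note: the subnet scan above walks the 192.168.x.0/24 candidates, skips any already claimed by an existing Docker bridge, and the node then gets the .2 address of the chosen subnet. To see which subnets are currently taken (a sketch, assuming only the docker CLI):

	docker network ls --format '{{.Name}}' | xargs -r -n1 \
	  docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'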
	I1124 09:30:14.586627  350567 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:30:14.610804  350567 cli_runner.go:164] Run: docker volume create embed-certs-673346 --label name.minikube.sigs.k8s.io=embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:30:14.632832  350567 oci.go:103] Successfully created a docker volume embed-certs-673346
	I1124 09:30:14.632925  350567 cli_runner.go:164] Run: docker run --rm --name embed-certs-673346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --entrypoint /usr/bin/test -v embed-certs-673346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:30:15.090593  350567 oci.go:107] Successfully prepared a docker volume embed-certs-673346
	I1124 09:30:15.090677  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:15.090690  350567 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:30:15.090748  350567 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
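Note: the preload step unpacks a cached image tarball straight into the node's named volume before the node container boots. Generic shape of that command (placeholder variables; the concrete values appear in the log line above):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "${PRELOAD_TARBALL}:/preloaded.tar:ro" \
	  -v "${NODE_VOLUME}:/extractDir" \
	  "${KICBASE_IMAGE}" -I lz4 -xf /preloaded.tar -C /extractDir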
	I1124 09:30:18.723622  346330 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:18.723658  346330 node_ready.go:38] duration metric: took 2.791273581s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:18.723674  346330 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:18.723726  346330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:19.762798  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801068095s)
	I1124 09:30:19.762854  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.778333862s)
	I1124 09:30:19.809952  346330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.086204016s)
	I1124 09:30:19.809990  346330 api_server.go:72] duration metric: took 4.127914679s to wait for apiserver process to appear ...
	I1124 09:30:19.809999  346330 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:19.810019  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:19.810840  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65851014s)
	I1124 09:30:19.812854  346330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-164377 addons enable metrics-server
	
	I1124 09:30:19.814608  346330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:30:19.815981  346330 addons.go:530] duration metric: took 4.133745613s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:30:19.819288  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:30:19.819490  346330 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:30:20.310801  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:20.318184  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:30:20.320089  346330 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:20.320238  346330 api_server.go:131] duration metric: took 510.229099ms to wait for apiserver health ...
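Note: the earlier 500 was caused solely by `[-]poststarthook/rbac/bootstrap-roles failed`, a normal transient while RBAC bootstrap roles are recreated after a restart; the next probe returned 200. The same verbose endpoint can be queried through the node's bundled kubectl:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.2/kubectl get --raw '/healthz?verbose'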
	I1124 09:30:20.320485  346330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:20.328441  346330 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:20.328478  346330 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.328490  346330 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.328498  346330 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.328506  346330 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.328515  346330 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.328521  346330 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.328529  346330 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.328534  346330 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.328541  346330 system_pods.go:74] duration metric: took 7.85104ms to wait for pod list to return data ...
	I1124 09:30:20.328554  346330 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:20.332981  346330 default_sa.go:45] found service account: "default"
	I1124 09:30:20.333009  346330 default_sa.go:55] duration metric: took 4.449084ms for default service account to be created ...
	I1124 09:30:20.333021  346330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:20.338641  346330 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:20.338682  346330 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.338698  346330 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.338709  346330 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.338718  346330 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.338727  346330 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.338734  346330 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.338741  346330 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.338747  346330 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.338757  346330 system_pods.go:126] duration metric: took 5.728957ms to wait for k8s-apps to be running ...
	I1124 09:30:20.338767  346330 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:20.338820  346330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:20.357733  346330 system_svc.go:56] duration metric: took 18.956624ms WaitForService to wait for kubelet
	I1124 09:30:20.358599  346330 kubeadm.go:587] duration metric: took 4.676515085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:20.358629  346330 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:20.363231  346330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:20.363257  346330 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:20.363289  346330 node_conditions.go:105] duration metric: took 4.654352ms to run NodePressure ...
	I1124 09:30:20.363303  346330 start.go:242] waiting for startup goroutines ...
	I1124 09:30:20.363313  346330 start.go:247] waiting for cluster config update ...
	I1124 09:30:20.363345  346330 start.go:256] writing updated cluster config ...
	I1124 09:30:20.363650  346330 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:20.369452  346330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:20.373717  346330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:19.611672  350567 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.520871166s)
	I1124 09:30:19.611715  350567 kic.go:203] duration metric: took 4.521020447s to extract preloaded images to volume ...
	W1124 09:30:19.612119  350567 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:30:19.612200  350567 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:30:19.612273  350567 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:30:19.706294  350567 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-673346 --name embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-673346 --network embed-certs-673346 --ip 192.168.76.2 --volume embed-certs-673346:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:30:20.123957  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Running}}
	I1124 09:30:20.146501  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.169846  350567 cli_runner.go:164] Run: docker exec embed-certs-673346 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:30:20.229570  350567 oci.go:144] the created container "embed-certs-673346" has a running status.
	I1124 09:30:20.229610  350567 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa...
	I1124 09:30:20.290959  350567 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:30:20.332257  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.366886  350567 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:30:20.366912  350567 kic_runner.go:114] Args: [docker exec --privileged embed-certs-673346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:30:20.421029  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.448864  350567 machine.go:94] provisionDockerMachine start ...
	I1124 09:30:20.448975  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:20.471107  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:20.471475  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:20.471493  350567 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:30:20.472225  350567 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52462->127.0.0.1:33133: read: connection reset by peer
	I1124 09:30:23.653448  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.653510  350567 ubuntu.go:182] provisioning hostname "embed-certs-673346"
	I1124 09:30:23.653756  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.678607  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.678937  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.678958  350567 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-673346 && echo "embed-certs-673346" | sudo tee /etc/hostname
	I1124 09:30:23.850425  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.850503  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.874386  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.874730  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.874760  350567 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-673346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-673346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-673346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:30:24.034104  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:30:24.034135  350567 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:30:24.034160  350567 ubuntu.go:190] setting up certificates
	I1124 09:30:24.034174  350567 provision.go:84] configureAuth start
	I1124 09:30:24.034235  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.056481  350567 provision.go:143] copyHostCerts
	I1124 09:30:24.056552  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:30:24.056564  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:30:24.056628  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:30:24.056755  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:30:24.056763  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:30:24.056806  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:30:24.056918  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:30:24.056931  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:30:24.056973  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:30:24.057091  350567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-673346 san=[127.0.0.1 192.168.76.2 embed-certs-673346 localhost minikube]
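Note: minikube signs the machine server cert with its own CA for the SANs listed above. A hypothetical openssl equivalent of that step (file names and validity period are illustrative, not minikube's actual code path):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.embed-certs-673346" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem -extfile <(printf \
	  'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-673346,DNS:localhost,DNS:minikube')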
	I1124 09:30:24.206865  350567 provision.go:177] copyRemoteCerts
	I1124 09:30:24.206922  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:30:24.206955  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.226403  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	W1124 09:30:22.380052  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:24.380391  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:24.331162  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:30:24.354961  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:30:24.377647  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:30:24.400776  350567 provision.go:87] duration metric: took 366.587357ms to configureAuth
	I1124 09:30:24.400805  350567 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:30:24.400996  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:24.401117  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.424078  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:24.424426  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:24.424457  350567 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:30:24.754396  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:30:24.754421  350567 machine.go:97] duration metric: took 4.30553632s to provisionDockerMachine
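
The CRIO_MINIKUBE_OPTIONS drop-in written just above is meant to be sourced by the crio service unit inside the kicbase image; whether it landed and the daemon came back can be checked with (a sketch, not part of this run):

    grep CRIO_MINIKUBE_OPTIONS /etc/sysconfig/crio.minikube && sudo systemctl is-active crio
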
	I1124 09:30:24.754433  350567 client.go:176] duration metric: took 10.29828879s to LocalClient.Create
	I1124 09:30:24.754450  350567 start.go:167] duration metric: took 10.298341795s to libmachine.API.Create "embed-certs-673346"
	I1124 09:30:24.754459  350567 start.go:293] postStartSetup for "embed-certs-673346" (driver="docker")
	I1124 09:30:24.754471  350567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:30:24.754538  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:30:24.754583  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.780786  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:24.896450  350567 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:30:24.900141  350567 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:30:24.900169  350567 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:30:24.900181  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:30:24.900238  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:30:24.900352  350567 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:30:24.900469  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:30:24.908686  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:24.930429  350567 start.go:296] duration metric: took 175.958432ms for postStartSetup
	I1124 09:30:24.930756  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.951946  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:24.952213  350567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:30:24.952254  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.971774  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.084790  350567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:30:25.089773  350567 start.go:128] duration metric: took 10.635765016s to createHost
	I1124 09:30:25.089794  350567 start.go:83] releasing machines lock for "embed-certs-673346", held for 10.635883769s
	I1124 09:30:25.089855  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:25.107834  350567 ssh_runner.go:195] Run: cat /version.json
	I1124 09:30:25.107876  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.107876  350567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:30:25.107963  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.126027  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.127155  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.225482  350567 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:25.285543  350567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:30:25.321857  350567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:30:25.327941  350567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:30:25.328019  350567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:30:25.524839  350567 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
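
Disabling here just renames the matching configs with a .mk_disabled suffix, so the step is reversible; undoing it by hand would look like this (hypothetical, not executed in this run):

    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done
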
	I1124 09:30:25.524863  350567 start.go:496] detecting cgroup driver to use...
	I1124 09:30:25.524891  350567 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:30:25.524934  350567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:30:25.542024  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:30:25.555182  350567 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:30:25.555243  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:30:25.572649  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:30:25.594452  350567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:30:25.688181  350567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:30:25.780701  350567 docker.go:234] disabling docker service ...
	I1124 09:30:25.780765  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:30:25.801555  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:30:25.816167  350567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:30:25.936230  350567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:30:26.054601  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:30:26.071219  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:30:26.089974  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:26.264765  350567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:30:26.264839  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.285104  350567 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:30:26.285169  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.296551  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.307239  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.318284  350567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:30:26.328483  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.339222  350567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.356669  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
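
Reconstructed from the sed edits above (not captured from the node), the touched fragment of /etc/crio/crio.conf.d/02-crio.conf should now read roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
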
	I1124 09:30:26.367765  350567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:30:26.377490  350567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:30:26.386986  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:26.502123  350567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:30:26.740769  350567 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:30:26.740830  350567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:30:26.745782  350567 start.go:564] Will wait 60s for crictl version
	I1124 09:30:26.745832  350567 ssh_runner.go:195] Run: which crictl
	I1124 09:30:26.750426  350567 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:30:26.783507  350567 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:30:26.783585  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.821826  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.866519  350567 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:30:26.868046  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:26.895427  350567 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:26.900350  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:26.913358  350567 kubeadm.go:884] updating cluster {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:26.913735  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.099545  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.285631  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.447699  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:27.447838  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.627950  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.802057  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.982378  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.023587  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.023612  350567 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:28.023667  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.057634  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.057658  350567 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:28.057667  350567 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 09:30:28.057782  350567 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-673346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
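
That unit text is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a moment later (09:30:28.146 below); once systemd has reloaded, the effective command line can be confirmed with (a quick check, not part of the log):

    systemctl cat kubelet | grep -A1 '^ExecStart='
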
	I1124 09:30:28.057861  350567 ssh_runner.go:195] Run: crio config
	I1124 09:30:28.125113  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:28.125141  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:28.125163  350567 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:28.125194  350567 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-673346 NodeName:embed-certs-673346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:28.125384  350567 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-673346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:30:28.125457  350567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:28.136211  350567 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:28.136278  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:28.146766  350567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 09:30:28.162970  350567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:28.183026  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
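
Before that file is handed to kubeadm init, its schema could be sanity-checked; kubeadm v1.26+ ships a validator for exactly this (the run itself skips the step):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
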
	I1124 09:30:28.199769  350567 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:28.204631  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:28.216670  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:28.333908  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:28.358960  350567 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346 for IP: 192.168.76.2
	I1124 09:30:28.358982  350567 certs.go:195] generating shared ca certs ...
	I1124 09:30:28.359000  350567 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.359152  350567 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:28.359204  350567 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:28.359216  350567 certs.go:257] generating profile certs ...
	I1124 09:30:28.359284  350567 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key
	I1124 09:30:28.359301  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt with IP's: []
	I1124 09:30:28.437471  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt ...
	I1124 09:30:28.437495  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt: {Name:mk8b7253b9b301c91d2672344892984576a60144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437641  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key ...
	I1124 09:30:28.437654  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key: {Name:mk2a06bce20bfcf3fd65f78bc031396f7e03338b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437728  350567 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844
	I1124 09:30:28.437742  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 09:30:28.481815  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 ...
	I1124 09:30:28.481840  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844: {Name:mk5fa5046e27fe7d2f0e4475b095f002a239fd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482010  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 ...
	I1124 09:30:28.482030  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844: {Name:mkd08edea57155db981a087021feb4524402ea29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482143  350567 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt
	I1124 09:30:28.482230  350567 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key
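
To confirm the SANs that went into the apiserver cert above (the IP list logged at 09:30:28.437742), a standard openssl inspection works:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
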
	I1124 09:30:28.482292  350567 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key
	I1124 09:30:28.482308  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt with IP's: []
	I1124 09:30:28.544080  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt ...
	I1124 09:30:28.544107  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt: {Name:mkfd4e68c065efc0731596098a6a75426ddfaab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544288  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key ...
	I1124 09:30:28.544305  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key: {Name:mk2884332d5edfe59fc22312877e42be26c5e588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544523  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:28.544565  350567 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:28.544576  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:28.544600  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:28.544632  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:28.544654  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:28.544696  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:28.545236  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:28.564379  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:28.581704  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:28.598963  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:28.616453  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:30:28.634084  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:30:28.650828  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:28.667240  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:30:28.683947  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:28.702546  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:28.721403  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:28.738492  350567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:28.750548  350567 ssh_runner.go:195] Run: openssl version
	I1124 09:30:28.756295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:28.764541  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768172  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768220  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.802295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:28.811614  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:28.819817  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823382  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823443  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.856763  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:28.865271  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:28.873545  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877598  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877655  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.919370  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
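
The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, the index format OpenSSL expects in /etc/ssl/certs; each one comes straight from the x509 -hash calls shown in the log, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
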
	I1124 09:30:28.928260  350567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:28.931896  350567 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:30:28.931943  350567 kubeadm.go:401] StartCluster: {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:28.932015  350567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:28.932059  350567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:28.958681  350567 cri.go:89] found id: ""
	I1124 09:30:28.958744  350567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:28.966642  350567 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:30:28.974403  350567 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:30:28.974471  350567 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:30:28.981638  350567 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:30:28.981659  350567 kubeadm.go:158] found existing configuration files:
	
	I1124 09:30:28.981689  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:30:28.989069  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:30:28.989126  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:30:28.996227  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:30:29.003603  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:30:29.003655  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:30:29.010559  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.017747  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:30:29.017791  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.024818  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:30:29.032072  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:30:29.032110  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
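
The four grep-then-rm exchanges above amount to one cleanup loop; an equivalent sketch with the same endpoint and paths:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
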
	I1124 09:30:29.039184  350567 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:30:29.109731  350567 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:30:29.170264  350567 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 09:30:26.880546  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:29.379444  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:31.879625  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:34.379563  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:38.748399  350567 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:30:38.748506  350567 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:30:38.748626  350567 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:30:38.748685  350567 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:30:38.748714  350567 kubeadm.go:319] OS: Linux
	I1124 09:30:38.748760  350567 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:30:38.748798  350567 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:30:38.748841  350567 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:30:38.748881  350567 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:30:38.748952  350567 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:30:38.749042  350567 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:30:38.749116  350567 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:30:38.749159  350567 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:30:38.749218  350567 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:30:38.749302  350567 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:30:38.749395  350567 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:30:38.749449  350567 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:30:38.750906  350567 out.go:252]   - Generating certificates and keys ...
	I1124 09:30:38.750995  350567 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:30:38.751089  350567 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:30:38.751177  350567 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:30:38.751224  350567 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:30:38.751273  350567 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:30:38.751317  350567 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:30:38.751438  350567 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:30:38.751613  350567 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751694  350567 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:30:38.751864  350567 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751935  350567 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:30:38.752013  350567 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:30:38.752054  350567 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:30:38.752103  350567 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:30:38.752193  350567 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:30:38.752302  350567 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:30:38.752409  350567 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:30:38.752476  350567 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:30:38.752520  350567 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:30:38.752585  350567 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:30:38.752640  350567 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:30:38.753908  350567 out.go:252]   - Booting up control plane ...
	I1124 09:30:38.753982  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:30:38.754048  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:30:38.754101  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:30:38.754214  350567 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:30:38.754351  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:30:38.754483  350567 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:30:38.754595  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:30:38.754657  350567 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:30:38.754803  350567 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:30:38.754931  350567 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:30:38.755022  350567 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.798528ms
	I1124 09:30:38.755160  350567 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:30:38.755241  350567 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 09:30:38.755362  350567 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:30:38.755437  350567 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:30:38.755504  350567 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.263591063s
	I1124 09:30:38.755565  350567 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.451899812s
	I1124 09:30:38.755620  350567 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002226153s
	I1124 09:30:38.755712  350567 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:30:38.755827  350567 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:30:38.755921  350567 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:30:38.756130  350567 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-673346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:30:38.756180  350567 kubeadm.go:319] [bootstrap-token] Using token: s5v8q1.i02i5m2whwuijtw1
	I1124 09:30:38.757350  350567 out.go:252]   - Configuring RBAC rules ...
	I1124 09:30:38.757460  350567 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:30:38.757561  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:30:38.757739  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:30:38.757875  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:30:38.758003  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:30:38.758127  350567 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:30:38.758258  350567 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:30:38.758326  350567 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:30:38.758411  350567 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:30:38.758419  350567 kubeadm.go:319] 
	I1124 09:30:38.758489  350567 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:30:38.758501  350567 kubeadm.go:319] 
	I1124 09:30:38.758566  350567 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:30:38.758571  350567 kubeadm.go:319] 
	I1124 09:30:38.758593  350567 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:30:38.758643  350567 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:30:38.758691  350567 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:30:38.758696  350567 kubeadm.go:319] 
	I1124 09:30:38.758770  350567 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:30:38.758783  350567 kubeadm.go:319] 
	I1124 09:30:38.758851  350567 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:30:38.758864  350567 kubeadm.go:319] 
	I1124 09:30:38.758912  350567 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:30:38.758992  350567 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:30:38.759090  350567 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:30:38.759098  350567 kubeadm.go:319] 
	I1124 09:30:38.759200  350567 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:30:38.759305  350567 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:30:38.759315  350567 kubeadm.go:319] 
	I1124 09:30:38.759405  350567 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759526  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:30:38.759550  350567 kubeadm.go:319] 	--control-plane 
	I1124 09:30:38.759555  350567 kubeadm.go:319] 
	I1124 09:30:38.759678  350567 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:30:38.759688  350567 kubeadm.go:319] 
	I1124 09:30:38.759796  350567 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759949  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
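
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA public key; it can be recomputed on the node with the usual kubeadm recipe (certificatesDir here is /var/lib/minikube/certs, and the CA is assumed to be RSA):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
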
	I1124 09:30:38.759964  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:38.759972  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:38.761479  350567 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:30:38.762425  350567 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:30:38.766592  350567 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:30:38.766608  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:30:38.779390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:30:38.984251  350567 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:30:38.984311  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:38.984390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-673346 minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-673346 minikube.k8s.io/primary=true
	I1124 09:30:38.993988  350567 ops.go:34] apiserver oom_adj: -16
	I1124 09:30:39.059843  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 09:30:36.879966  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:38.880135  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:41.379898  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:39.560572  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.059940  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.560087  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.060193  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.560831  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.059938  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.560813  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.059947  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.560941  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.633307  350567 kubeadm.go:1114] duration metric: took 4.64904392s to wait for elevateKubeSystemPrivileges
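
The repeated `get sa default` calls between 09:30:39 and 09:30:43 are a poll loop: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist before the cluster-admin binding created at 09:30:38.984 is considered effective. The same wait, written by hand:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
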
	I1124 09:30:43.633356  350567 kubeadm.go:403] duration metric: took 14.701415807s to StartCluster
	I1124 09:30:43.633377  350567 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.633432  350567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:43.634680  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.634890  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:30:43.634909  350567 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:43.634960  350567 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-673346"
	I1124 09:30:43.634893  350567 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:43.634981  350567 addons.go:70] Setting default-storageclass=true in profile "embed-certs-673346"
	I1124 09:30:43.635001  350567 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-673346"
	I1124 09:30:43.634977  350567 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-673346"
	I1124 09:30:43.635127  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.635080  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:43.635321  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.635592  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.637167  350567 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:43.638326  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:43.662561  350567 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:43.663142  350567 addons.go:239] Setting addon default-storageclass=true in "embed-certs-673346"
	I1124 09:30:43.663183  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.663673  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.663998  350567 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.664015  350567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:43.664062  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691153  350567 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.691177  350567 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:43.691228  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691586  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.714047  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.730614  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:30:43.771446  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:43.810568  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.828496  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.934529  350567 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 09:30:43.935556  350567 node_ready.go:35] waiting up to 6m0s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:44.140071  350567 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:30:44.141027  350567 addons.go:530] duration metric: took 506.116331ms for enable addons: enabled=[storage-provisioner default-storageclass]
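[editor's note] The CoreDNS rewrite above (and the "host record injected" line) amounts to inserting a `hosts` stanza ahead of the `forward` plugin so that host.minikube.internal resolves to the host gateway (192.168.76.1 here). A rough sketch of the same string surgery the logged sed pipeline performs (hypothetical; the real flow pipes the ConfigMap through sed and `kubectl replace -f -`):

// corefile_inject.go - sketch of the hosts-block injection from the log.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the forward
// directive, mirroring the sed command CRI-O ran above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Abbreviated sample Corefile for illustration only.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}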
	W1124 09:30:43.881243  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:46.378931  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:44.438080  350567 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-673346" context rescaled to 1 replicas
	W1124 09:30:45.938541  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:47.939500  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:48.879272  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:50.879407  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:52.380637  346330 pod_ready.go:94] pod "coredns-66bc5c9577-gn9zx" is "Ready"
	I1124 09:30:52.380665  346330 pod_ready.go:86] duration metric: took 32.006923448s for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.383181  346330 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.386812  346330 pod_ready.go:94] pod "etcd-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.386836  346330 pod_ready.go:86] duration metric: took 3.636091ms for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.388809  346330 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.392121  346330 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.392137  346330 pod_ready.go:86] duration metric: took 3.312038ms for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.393861  346330 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.577642  346330 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.577666  346330 pod_ready.go:86] duration metric: took 183.789548ms for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.777477  346330 pod_ready.go:83] waiting for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.177318  346330 pod_ready.go:94] pod "kube-proxy-2vm2s" is "Ready"
	I1124 09:30:53.177358  346330 pod_ready.go:86] duration metric: took 399.857272ms for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.377464  346330 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777288  346330 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:53.777312  346330 pod_ready.go:86] duration metric: took 399.822555ms for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777323  346330 pod_ready.go:40] duration metric: took 33.407838856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:53.820139  346330 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:53.822727  346330 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-164377" cluster and "default" namespace by default
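[editor's note] The pod_ready waits above boil down to reading the PodReady condition off each pod's status; the 32s spent on coredns-66bc5c9577-gn9zx is just this check looping until the condition flips to True. A minimal sketch of that check (hypothetical; the real helper also treats a deleted pod as success, per the "or be gone" wording in the log):

// pod_ready.go - sketch of the PodReady condition check.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-66bc5c9577-gn9zx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
}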
	W1124 09:30:49.939645  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:52.439029  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	I1124 09:30:54.938888  350567 node_ready.go:49] node "embed-certs-673346" is "Ready"
	I1124 09:30:54.938913  350567 node_ready.go:38] duration metric: took 11.003315497s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:54.938926  350567 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:54.938977  350567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:54.950812  350567 api_server.go:72] duration metric: took 11.315807298s to wait for apiserver process to appear ...
	I1124 09:30:54.950847  350567 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:54.950868  350567 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:30:54.956132  350567 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:30:54.957173  350567 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:54.957194  350567 api_server.go:131] duration metric: took 6.340368ms to wait for apiserver health ...
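[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver endpoint from the log (https://192.168.76.2:8443/healthz); anything other than a 200 with body "ok" keeps the loop going. A minimal sketch (hypothetical; the real check presents client certificates from the kubeconfig, which this throwaway version sidesteps by disabling TLS verification):

// healthz_probe.go - sketch of the apiserver healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-CA-signed cert; a quick probe like
		// this skips verification. Real code should load the CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body) // expect 200 and "ok"
}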
	I1124 09:30:54.957201  350567 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:54.960442  350567 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:54.960470  350567 system_pods.go:61] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.960475  350567 system_pods.go:61] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.960481  350567 system_pods.go:61] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.960484  350567 system_pods.go:61] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.960489  350567 system_pods.go:61] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.960492  350567 system_pods.go:61] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.960495  350567 system_pods.go:61] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.960503  350567 system_pods.go:61] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.960507  350567 system_pods.go:74] duration metric: took 3.301271ms to wait for pod list to return data ...
	I1124 09:30:54.960515  350567 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:54.962760  350567 default_sa.go:45] found service account: "default"
	I1124 09:30:54.962777  350567 default_sa.go:55] duration metric: took 2.256858ms for default service account to be created ...
	I1124 09:30:54.962784  350567 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:54.967150  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:54.967177  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.967185  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.967193  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.967199  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.967205  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.967216  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.967226  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.967234  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.967264  350567 retry.go:31] will retry after 253.013546ms: missing components: kube-dns
	I1124 09:30:55.224543  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.224572  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.224578  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.224584  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.224589  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.224595  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.224599  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.224604  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.224619  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.224640  350567 retry.go:31] will retry after 278.082193ms: missing components: kube-dns
	I1124 09:30:55.506580  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.506609  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.506618  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.506625  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.506630  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.506636  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.506641  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.506646  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.506661  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.506688  350567 retry.go:31] will retry after 307.004154ms: missing components: kube-dns
	I1124 09:30:55.818537  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.818854  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.818862  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.818868  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.818872  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.818877  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.818881  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.818885  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.818890  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.818908  350567 retry.go:31] will retry after 519.354598ms: missing components: kube-dns
	I1124 09:30:56.341803  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:56.341831  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running
	I1124 09:30:56.341837  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:56.341841  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:56.341845  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:56.341849  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:56.341853  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:56.341856  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:56.341861  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running
	I1124 09:30:56.341873  350567 system_pods.go:126] duration metric: took 1.379080603s to wait for k8s-apps to be running ...
	I1124 09:30:56.341884  350567 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:56.341932  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:56.354699  350567 system_svc.go:56] duration metric: took 12.804001ms WaitForService to wait for kubelet
	I1124 09:30:56.354739  350567 kubeadm.go:587] duration metric: took 12.719737164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:56.354759  350567 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:56.357637  350567 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:56.357683  350567 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:56.357699  350567 node_conditions.go:105] duration metric: took 2.935054ms to run NodePressure ...
	I1124 09:30:56.357714  350567 start.go:242] waiting for startup goroutines ...
	I1124 09:30:56.357729  350567 start.go:247] waiting for cluster config update ...
	I1124 09:30:56.357742  350567 start.go:256] writing updated cluster config ...
	I1124 09:30:56.358065  350567 ssh_runner.go:195] Run: rm -f paused
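[editor's note] The NodePressure verification a few lines up reads capacity and conditions straight from the node object; the ephemeral-storage (304681132Ki) and cpu (8) figures it logs match the Capacity block in the describe-nodes output further down. A minimal sketch of that read (hypothetical):

// node_pressure.go - sketch of the node capacity/pressure check.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(
		context.TODO(), "embed-certs-673346", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	// A node is under pressure when any of these conditions is True.
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}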
	I1124 09:30:56.361813  350567 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:56.364971  350567 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.368743  350567 pod_ready.go:94] pod "coredns-66bc5c9577-vgl62" is "Ready"
	I1124 09:30:56.368763  350567 pod_ready.go:86] duration metric: took 3.773601ms for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.370575  350567 pod_ready.go:83] waiting for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.374046  350567 pod_ready.go:94] pod "etcd-embed-certs-673346" is "Ready"
	I1124 09:30:56.374066  350567 pod_ready.go:86] duration metric: took 3.473581ms for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.375786  350567 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.379022  350567 pod_ready.go:94] pod "kube-apiserver-embed-certs-673346" is "Ready"
	I1124 09:30:56.379041  350567 pod_ready.go:86] duration metric: took 3.236137ms for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.380667  350567 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.765494  350567 pod_ready.go:94] pod "kube-controller-manager-embed-certs-673346" is "Ready"
	I1124 09:30:56.765522  350567 pod_ready.go:86] duration metric: took 384.833249ms for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.965563  350567 pod_ready.go:83] waiting for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.365568  350567 pod_ready.go:94] pod "kube-proxy-m54gs" is "Ready"
	I1124 09:30:57.365594  350567 pod_ready.go:86] duration metric: took 400.007869ms for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.566275  350567 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966130  350567 pod_ready.go:94] pod "kube-scheduler-embed-certs-673346" is "Ready"
	I1124 09:30:57.966154  350567 pod_ready.go:86] duration metric: took 399.858862ms for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966168  350567 pod_ready.go:40] duration metric: took 1.604321652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:58.012793  350567 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:58.014766  350567 out.go:179] * Done! kubectl is now configured to use "embed-certs-673346" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.944842711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945032835Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/facbdeb260606a1d4513f748d22dfcd0b47bdcf6695c6169eba61e4b3801e2bb/merged/etc/passwd: no such file or directory"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945067925Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/facbdeb260606a1d4513f748d22dfcd0b47bdcf6695c6169eba61e4b3801e2bb/merged/etc/group: no such file or directory"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945392863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.972254661Z" level=info msg="Created container 2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6: kube-system/storage-provisioner/storage-provisioner" id=b4164a14-e737-4897-9ecb-87a7e9f088a5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.972893518Z" level=info msg="Starting container: 2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6" id=dde07ce9-d76a-45ba-99c4-5188042a1879 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.974758842Z" level=info msg="Started container" PID=1708 containerID=2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6 description=kube-system/storage-provisioner/storage-provisioner id=dde07ce9-d76a-45ba-99c4-5188042a1879 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd16e913a5558a47c96ac727c0ac6fbb13f439f5653eaf3e7c71cbb1f46c9347
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.203828191Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208119456Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208147384Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208172571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211844652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211869175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211895059Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215752066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215773423Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215790247Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219211882Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219239645Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219259502Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.22269139Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.222716546Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.22273718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.226039997Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.226069041Z" level=info msg="Updated default CNI network name to kindnet"
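[editor's note] The CREATE .temp → WRITE → RENAME sequence CRI-O's CNI monitor reports above is the standard atomic-update pattern: kindnet writes the new conflist to a sibling .temp file and renames it into place, so watchers never observe a half-written config. A minimal sketch of that pattern, standard library only (hypothetical; path under /tmp so it runs without root):

// atomic_write.go - sketch of the write-temp-then-rename pattern in the log.
package main

import (
	"log"
	"os"
)

// writeFileAtomic writes data to path+".temp", then renames it over the
// destination. rename(2) is atomic within a filesystem, so readers such as
// CRI-O's CNI monitor only ever see the old content or the new content.
func writeFileAtomic(path string, data []byte, perm os.FileMode) error {
	tmp := path + ".temp"
	if err := os.WriteFile(tmp, data, perm); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	conflist := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeFileAtomic("/tmp/10-kindnet.conflist", conflist, 0644); err != nil {
		log.Fatal(err)
	}
}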
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2beca2bae5993       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   bd16e913a5558       storage-provisioner                                    kube-system
	f4d186acf5471       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   46cf238b43e84       dashboard-metrics-scraper-6ffb444bf9-vv9dn             kubernetes-dashboard
	185c940805d0b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   b792fb5e586ff       kubernetes-dashboard-855c9754f9-2hcnx                  kubernetes-dashboard
	1fe2a5522f1a1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b3c7c0fcea6db       busybox                                                default
	8fd30725243cf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   731d4a9783396       coredns-66bc5c9577-gn9zx                               kube-system
	d1d380122c482       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   923fea32a34af       kube-proxy-2vm2s                                       kube-system
	35c10215bee00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   bd16e913a5558       storage-provisioner                                    kube-system
	cd5ff3cd6ed0a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   588108ec218f4       kindnet-kwvs7                                          kube-system
	dc45893b62892       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   302d82401367d       kube-scheduler-default-k8s-diff-port-164377            kube-system
	892148c6b52c0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   d1b3f6865e5ea       kube-controller-manager-default-k8s-diff-port-164377   kube-system
	4b1b6ab34f1c3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   9757bc356b892       kube-apiserver-default-k8s-diff-port-164377            kube-system
	4e50e4dcdd36d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   1d778d2764400       etcd-default-k8s-diff-port-164377                      kube-system
	
	
	==> coredns [8fd30725243cf74c22d0fc9ddf7c963a305c1829d64d4bfeaa81eec4f11cb627] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38974 - 37521 "HINFO IN 1023710936344039766.7822191422356292246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012097427s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-164377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-164377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=default-k8s-diff-port-164377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-164377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-164377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                89405ce7-5c63-4de3-9dc9-d223bdf4644b
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-gn9zx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-164377                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-kwvs7                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-164377             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-164377    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-2vm2s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-164377             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vv9dn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2hcnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-164377 event: Registered Node default-k8s-diff-port-164377 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-164377 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-164377 event: Registered Node default-k8s-diff-port-164377 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7] <==
	{"level":"info","ts":"2025-11-24T09:30:19.160092Z","caller":"traceutil/trace.go:172","msg":"trace[1565720357] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"334.240596ms","start":"2025-11-24T09:30:18.825834Z","end":"2025-11-24T09:30:19.160075Z","steps":["trace[1565720357] 'agreement among raft nodes before linearized reading'  (duration: 183.592552ms)","trace[1565720357] 'range keys from in-memory index tree'  (duration: 150.381511ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:30:19.160133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.825829Z","time spent":"334.290788ms","remote":"127.0.0.1:60150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":202,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-11-24T09:30:19.160441Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.422902ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597273599602564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" mod_revision:390 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" value_size:6084 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:30:19.160559Z","caller":"traceutil/trace.go:172","msg":"trace[1114847738] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"372.686172ms","start":"2025-11-24T09:30:18.787846Z","end":"2025-11-24T09:30:19.160532Z","steps":["trace[1114847738] 'process raft request'  (duration: 221.619761ms)","trace[1114847738] 'compare'  (duration: 150.309022ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:30:19.160615Z","caller":"traceutil/trace.go:172","msg":"trace[519783473] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:460; }","duration":"151.259118ms","start":"2025-11-24T09:30:19.009347Z","end":"2025-11-24T09:30:19.160606Z","steps":["trace[519783473] 'read index received'  (duration: 27.932µs)","trace[519783473] 'applied index is now lower than readState.Index'  (duration: 151.230743ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:30:19.160626Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.787827Z","time spent":"372.761296ms","remote":"127.0.0.1:60106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6152,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" mod_revision:390 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" value_size:6084 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.160683Z","caller":"traceutil/trace.go:172","msg":"trace[1473366631] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"338.533525ms","start":"2025-11-24T09:30:18.822144Z","end":"2025-11-24T09:30:19.160677Z","steps":["trace[1473366631] 'process raft request'  (duration: 338.382471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.160737Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.822117Z","time spent":"338.586901ms","remote":"127.0.0.1:60240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" mod_revision:439 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.160739Z","caller":"traceutil/trace.go:172","msg":"trace[1434946300] transaction","detail":"{read_only:false; number_of_response:0; response_revision:442; }","duration":"362.587978ms","start":"2025-11-24T09:30:18.798141Z","end":"2025-11-24T09:30:19.160729Z","steps":["trace[1434946300] 'process raft request'  (duration: 362.349414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.160817Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.798117Z","time spent":"362.64817ms","remote":"127.0.0.1:60094","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-164377\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-164377\" value_size:4255 >> failure:<>"}
	{"level":"warn","ts":"2025-11-24T09:30:19.160926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"291.173708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-11-24T09:30:19.160954Z","caller":"traceutil/trace.go:172","msg":"trace[1760795215] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:444; }","duration":"291.204856ms","start":"2025-11-24T09:30:18.869741Z","end":"2025-11-24T09:30:19.160946Z","steps":["trace[1760795215] 'agreement among raft nodes before linearized reading'  (duration: 291.105426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.130378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"warn","ts":"2025-11-24T09:30:19.161100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"292.101948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-kwvs7\" limit:1 ","response":"range_response_count:1 size:5431"}
	{"level":"info","ts":"2025-11-24T09:30:19.161120Z","caller":"traceutil/trace.go:172","msg":"trace[827457512] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:444; }","duration":"234.196472ms","start":"2025-11-24T09:30:18.926915Z","end":"2025-11-24T09:30:19.161111Z","steps":["trace[827457512] 'agreement among raft nodes before linearized reading'  (duration: 234.088335ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.161153Z","caller":"traceutil/trace.go:172","msg":"trace[1357198967] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-kwvs7; range_end:; response_count:1; response_revision:444; }","duration":"292.152369ms","start":"2025-11-24T09:30:18.868981Z","end":"2025-11-24T09:30:19.161133Z","steps":["trace[1357198967] 'agreement among raft nodes before linearized reading'  (duration: 291.934913ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.271667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-24T09:30:19.161189Z","caller":"traceutil/trace.go:172","msg":"trace[2061821447] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:444; }","duration":"234.300263ms","start":"2025-11-24T09:30:18.926882Z","end":"2025-11-24T09:30:19.161182Z","steps":["trace[2061821447] 'agreement among raft nodes before linearized reading'  (duration: 234.191561ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161221Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.109366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-11-24T09:30:19.161233Z","caller":"traceutil/trace.go:172","msg":"trace[907757436] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"330.218453ms","start":"2025-11-24T09:30:18.831006Z","end":"2025-11-24T09:30:19.161224Z","steps":["trace[907757436] 'process raft request'  (duration: 329.564455ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.161246Z","caller":"traceutil/trace.go:172","msg":"trace[1598513077] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:444; }","duration":"237.138703ms","start":"2025-11-24T09:30:18.924101Z","end":"2025-11-24T09:30:19.161240Z","steps":["trace[1598513077] 'agreement among raft nodes before linearized reading'  (duration: 237.063136ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161284Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.830986Z","time spent":"330.26498ms","remote":"127.0.0.1:60240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" mod_revision:435 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.584306Z","caller":"traceutil/trace.go:172","msg":"trace[689755430] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"124.922943ms","start":"2025-11-24T09:30:19.459364Z","end":"2025-11-24T09:30:19.584287Z","steps":["trace[689755430] 'process raft request'  (duration: 124.854602ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.584348Z","caller":"traceutil/trace.go:172","msg":"trace[80575847] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"125.649582ms","start":"2025-11-24T09:30:19.458651Z","end":"2025-11-24T09:30:19.584300Z","steps":["trace[80575847] 'process raft request'  (duration: 115.554436ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:31:09 up  1:13,  0 user,  load average: 2.74, 3.25, 2.26
	Linux default-k8s-diff-port-164377 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd5ff3cd6ed0a4412d7185ce32dcfa542107181ea6781701296539e88ec8c7f1] <==
	I1124 09:30:19.930268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:30:19.930888       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:30:19.931150       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:30:19.931172       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:30:19.931194       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:30:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:30:20.203819       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:30:20.203902       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:30:20.203915       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:30:20.204038       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 09:30:50.204544       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:30:50.204811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:30:50.204935       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:30:50.205130       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1124 09:30:51.804083       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:30:51.804118       1 metrics.go:72] Registering metrics
	I1124 09:30:51.804210       1 controller.go:711] "Syncing nftables rules"
	I1124 09:31:00.203511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:31:00.203581       1 main.go:301] handling current node
	
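The kindnet reflector errors above all fail against https://10.96.0.1:443 with `dial tcp ... i/o timeout`, i.e. the in-cluster apiserver VIP was unreachable for roughly 30 seconds after the restart, before the caches synced at 09:30:51. A hedged sketch for reproducing the same probe from inside the node container, assuming curl is present in the kicbase image:

	docker exec default-k8s-diff-port-164377 curl -ksS --max-time 5 https://10.96.0.1:443/version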
	
	==> kube-apiserver [4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe] <==
	I1124 09:30:18.781904       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:30:18.782043       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:30:18.782053       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:30:18.789788       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:30:18.793397       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:30:18.793428       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 09:30:18.793456       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:30:18.793480       1 aggregator.go:171] initial CRD sync complete...
	I1124 09:30:18.793494       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:30:18.793502       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:30:18.793508       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:30:18.815087       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:18.821452       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:30:18.868213       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1124 09:30:19.163444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:30:19.164878       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:30:19.293736       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:30:19.458159       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:30:19.596490       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:30:19.695834       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:30:19.755082       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.205.67"}
	I1124 09:30:19.786706       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.69.182"}
	I1124 09:30:22.190533       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:30:22.443276       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:30:22.741156       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99] <==
	I1124 09:30:22.099370       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:30:22.099482       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-164377"
	I1124 09:30:22.099570       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 09:30:22.100853       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:30:22.103305       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:30:22.104796       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:30:22.136626       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:30:22.136635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:30:22.138063       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:30:22.138087       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:30:22.138196       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:30:22.138619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:30:22.140017       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:30:22.141090       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:30:22.143281       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:30:22.144501       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:30:22.145589       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:30:22.150895       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:30:22.159018       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 09:30:22.159118       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 09:30:22.159179       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 09:30:22.159189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 09:30:22.159198       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 09:30:22.160309       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:30:22.161326       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [d1d380122c4828f19d29ada5371570b902bc1915f5aa17fbda0cb5bb589a355f] <==
	I1124 09:30:19.836200       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:30:19.907979       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:30:20.008783       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:30:20.009373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:30:20.009710       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:30:20.040240       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:30:20.040306       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:30:20.047437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:30:20.048372       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:30:20.048510       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:20.050104       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:30:20.050152       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:30:20.050183       1 config.go:200] "Starting service config controller"
	I1124 09:30:20.050190       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:30:20.050225       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:30:20.050231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:30:20.050257       1 config.go:309] "Starting node config controller"
	I1124 09:30:20.050262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:30:20.151317       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:30:20.151328       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:30:20.151378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:30:20.151393       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
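kube-proxy's own startup warning above suggests `--nodeport-addresses primary`. In a kubeadm-managed cluster that setting lives in the kube-proxy ConfigMap; a hedged sketch (the ConfigMap name and the config.conf key are kubeadm defaults, and minikube may regenerate the ConfigMap on restart):

	kubectl -n kube-system edit configmap kube-proxy   # set nodePortAddresses: ["primary"] under config.conf
	kubectl -n kube-system rollout restart daemonset kube-proxy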
	
	==> kube-scheduler [dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8] <==
	I1124 09:30:16.276631       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:30:18.704038       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:30:18.704076       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:30:18.704103       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:30:18.704113       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:30:18.732935       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:30:18.732968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:18.735976       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:18.736396       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:18.736486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:30:18.736592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:30:18.836784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
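The requestheader warning above is usually transient at startup, before RBAC objects are being served; the scheduler proceeds without the authentication ConfigMap. If it persisted, the log's own suggested remediation would translate to something like the following sketch (the rolebinding name is arbitrary, and the scheduler authenticates as the user system:kube-scheduler rather than a service account):

	kubectl -n kube-system create rolebinding kube-scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler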
	
	==> kubelet <==
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.273410     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749448     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/26b38e76-0b44-4ea1-87db-97ff20b2a167-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2hcnx\" (UID: \"26b38e76-0b44-4ea1-87db-97ff20b2a167\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749509     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glwx7\" (UniqueName: \"kubernetes.io/projected/26b38e76-0b44-4ea1-87db-97ff20b2a167-kube-api-access-glwx7\") pod \"kubernetes-dashboard-855c9754f9-2hcnx\" (UID: \"26b38e76-0b44-4ea1-87db-97ff20b2a167\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749540     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2079b20-bcd6-49c4-97e8-93f5ee8c31d9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vv9dn\" (UID: \"a2079b20-bcd6-49c4-97e8-93f5ee8c31d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749563     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ncq\" (UniqueName: \"kubernetes.io/projected/a2079b20-bcd6-49c4-97e8-93f5ee8c31d9-kube-api-access-r6ncq\") pod \"dashboard-metrics-scraper-6ffb444bf9-vv9dn\" (UID: \"a2079b20-bcd6-49c4-97e8-93f5ee8c31d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn"
	Nov 24 09:30:25 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:25.867355     726 scope.go:117] "RemoveContainer" containerID="01d6d7efbe6af9d9e43cfba32ca7dc70d87a9d1315d618c17ba1e5fc7c8b083d"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:26.872635     726 scope.go:117] "RemoveContainer" containerID="01d6d7efbe6af9d9e43cfba32ca7dc70d87a9d1315d618c17ba1e5fc7c8b083d"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:26.872772     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:26.872964     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:27 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:27.878380     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:27 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:27.878586     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:28 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:28.893195     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx" podStartSLOduration=1.43430086 podStartE2EDuration="6.893169789s" podCreationTimestamp="2025-11-24 09:30:22 +0000 UTC" firstStartedPulling="2025-11-24 09:30:22.999073827 +0000 UTC m=+8.307348679" lastFinishedPulling="2025-11-24 09:30:28.457942755 +0000 UTC m=+13.766217608" observedRunningTime="2025-11-24 09:30:28.892874819 +0000 UTC m=+14.201149688" watchObservedRunningTime="2025-11-24 09:30:28.893169789 +0000 UTC m=+14.201444658"
	Nov 24 09:30:34 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:34.348296     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:34 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:34.348550     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.799068     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.928852     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.929092     726 scope.go:117] "RemoveContainer" containerID="f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:47.929352     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:49 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:49.937428     726 scope.go:117] "RemoveContainer" containerID="35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273"
	Nov 24 09:30:54 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:54.348625     726 scope.go:117] "RemoveContainer" containerID="f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	Nov 24 09:30:54 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:54.348843     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: kubelet.service: Consumed 1.670s CPU time.
	
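The kubelet lines above show dashboard-metrics-scraper cycling through CrashLoopBackOff (the back-off growing from 10s to 20s) right up until systemd stops the kubelet for the pause under test. A minimal sketch for pulling the crashing container's last output, using the pod name from the log:

	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-vv9dn --previous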
	
	==> kubernetes-dashboard [185c940805d0b7d87ece5c74083744172e4a580f1090813538cc221bc51f08ca] <==
	2025/11/24 09:30:28 Starting overwatch
	2025/11/24 09:30:28 Using namespace: kubernetes-dashboard
	2025/11/24 09:30:28 Using in-cluster config to connect to apiserver
	2025/11/24 09:30:28 Using secret token for csrf signing
	2025/11/24 09:30:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:30:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:30:28 Successful initial request to the apiserver, version: v1.34.2
	2025/11/24 09:30:28 Generating JWE encryption key
	2025/11/24 09:30:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:30:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:30:28 Initializing JWE encryption key from synchronized object
	2025/11/24 09:30:28 Creating in-cluster Sidecar client
	2025/11/24 09:30:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:30:28 Serving insecurely on HTTP port: 9090
	2025/11/24 09:30:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
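The dashboard's metric client health check fails because the dashboard-metrics-scraper service has no ready endpoints while its pod crash-loops (see the kubelet section above); the dashboard itself keeps serving on port 9090. A quick check, using the service name from the log and the EndpointSlice API the storage-provisioner warnings below recommend:

	kubectl -n kubernetes-dashboard get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper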
	
	==> storage-provisioner [2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6] <==
	I1124 09:30:49.987176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:30:49.994857       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:30:49.994901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:30:49.997126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:53.452409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:57.712973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:01.311735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:04.365141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.387855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.392861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:31:07.393050       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:31:07.393190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c66cc1a2-9dbe-4e90-b04e-0717d7b6501e", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606 became leader
	I1124 09:31:07.393253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606!
	W1124 09:31:07.395414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.399198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:31:07.493761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606!
	W1124 09:31:09.402966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:09.407250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273] <==
	I1124 09:30:19.731713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:30:49.740499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
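Of the two storage-provisioner containers above, the first (35c10...) died fatally when its /version probe to 10.96.0.1:443 timed out in the same window as the kindnet errors; its replacement (2beca...) then won the kube-system/k8s.io-minikube-hostpath lease. A hedged way to inspect the election record (with client-go's Endpoints lock, the holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml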

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377: exit status 2 (324.771713ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-164377
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-164377:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	        "Created": "2025-11-24T09:28:58.752077739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346670,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:30:06.731691108Z",
	            "FinishedAt": "2025-11-24T09:30:05.751729354Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hostname",
	        "HostsPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/hosts",
	        "LogPath": "/var/lib/docker/containers/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c/83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c-json.log",
	        "Name": "/default-k8s-diff-port-164377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-164377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-164377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83d485128258f73d09a5301e942ff787c5f009a20a133d4064e181a85d59c38c",
	                "LowerDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e564b5585945e60c2f402b24b24367ec1398561a143100431c9859ffabfaada3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-164377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-164377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-164377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-164377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8800361c775545b4d965033a039d0f80fa3415b8b0e2f5e9328b13e6b4b027bd",
	            "SandboxKey": "/var/run/docker/netns/8800361c7755",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-164377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1e00630149587d24459445d686d13d40af862a7ea70db024de88f2ab8bf6b09",
	                    "EndpointID": "66c2f64a23a6af3af90c2548023247b954e65813ae18cfc1f617ea6a329de5a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:81:91:90:bb:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-164377",
	                        "83d485128258"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
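Most of the inspect output above is defaults; the parts the post-mortem actually uses are State.Status and the published ports (22, 2376, 5000, 8444, 32443 mapped to 127.0.0.1:33128-33132). A narrower query with docker's Go-template formatter, sketched for this profile:

	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' default-k8s-diff-port-164377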
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
E1124 09:31:10.979602    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377: exit status 2 (318.162626ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-164377 logs -n 25: (1.081425222s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-164377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-164377 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-639420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ stop    │ -p newest-cni-639420 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                                                                                                           │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                                                                                            │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                                                                                                           │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p newest-cni-639420                                                                                                                                                                                                                                 │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                                                                                                 │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ default-k8s-diff-port-164377 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-164377 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-673346 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:30:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:30:14.256245  350567 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:30:14.256374  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256383  350567 out.go:374] Setting ErrFile to fd 2...
	I1124 09:30:14.256387  350567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:30:14.256590  350567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:30:14.257068  350567 out.go:368] Setting JSON to false
	I1124 09:30:14.258256  350567 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4360,"bootTime":1763972254,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:30:14.258310  350567 start.go:143] virtualization: kvm guest
	I1124 09:30:14.260266  350567 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:30:14.261445  350567 notify.go:221] Checking for updates...
	I1124 09:30:14.261485  350567 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:30:14.262753  350567 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:30:14.264083  350567 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:14.265432  350567 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:30:14.266629  350567 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:30:14.268064  350567 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:30:14.269699  350567 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:14.269849  350567 config.go:182] Loaded profile config "newest-cni-639420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.269945  350567 config.go:182] Loaded profile config "no-preload-938348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:30:14.270033  350567 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:30:14.295962  350567 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:30:14.296062  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.353929  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.34315637 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.354017  350567 docker.go:319] overlay module found
	I1124 09:30:14.355843  350567 out.go:179] * Using the docker driver based on user configuration
	I1124 09:30:14.357036  350567 start.go:309] selected driver: docker
	I1124 09:30:14.357055  350567 start.go:927] validating driver "docker" against <nil>
	I1124 09:30:14.357071  350567 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:30:14.357913  350567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:30:14.421846  350567 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:30:14.410748585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:30:14.422058  350567 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:30:14.422268  350567 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:14.423788  350567 out.go:179] * Using Docker driver with root privileges
	I1124 09:30:14.424821  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.424879  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.424889  350567 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:30:14.424949  350567 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:14.426196  350567 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:30:14.427568  350567 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:30:14.428764  350567 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:30:14.430011  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:14.430039  350567 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:30:14.430057  350567 cache.go:65] Caching tarball of preloaded images
	I1124 09:30:14.430101  350567 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:30:14.430158  350567 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:30:14.430171  350567 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:30:14.430275  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:14.430300  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json: {Name:mk0422b133bc5e40a804c0d52d08ba9c0b2ed1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.453692  350567 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:30:14.453709  350567 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:30:14.453740  350567 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:30:14.453787  350567 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:30:14.453896  350567 start.go:364] duration metric: took 91.14µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:30:14.453926  350567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:14.453996  350567 start.go:125] createHost starting for "" (driver="docker")
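
The run above validates the docker driver by shelling out to docker system info --format "{{json .}}" and decoding resource fields such as NCPU and MemTotal before it commits to provisioning. A minimal Go sketch of the same probe follows; the struct covers only the fields inspected here, and the type and field selection are ours, not minikube's:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo holds just the fields this sketch inspects; the real
    // `docker system info` JSON carries many more (see the log above).
    type dockerInfo struct {
    	NCPU     int    `json:"NCPU"`
    	MemTotal int64  `json:"MemTotal"`
    	Name     string `json:"Name"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// The run above requests CPUs=2 Memory=3072MB, so a host reporting
    	// NCPU:8 MemTotal:33652080640 passes comfortably.
    	fmt.Printf("host %s: %d CPUs, %d bytes RAM\n", info.Name, info.NCPU, info.MemTotal)
    }
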
	I1124 09:30:13.147546  346330 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-164377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:13.167771  346330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:13.172050  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:13.182388  346330 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:13.182659  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.335407  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.491838  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.647119  346330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:13.647243  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:13.846371  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:14.028841  346330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
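
Each "Not caching binary" line above points the downloader at the kubeadm URL with a checksum query referring to the published .sha256 file, so the fetched binary can be verified against it. A self-contained Go sketch of the same verify-after-download idea (minikube delegates this to its download machinery; this is an illustration, not its code):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // fetch downloads url into memory; a helper for this sketch only.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm"
    	sum, err := fetch(base + ".sha256") // published hex digest
    	if err != nil {
    		panic(err)
    	}
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0]
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch")
    	}
    	fmt.Println("kubeadm verified:", want)
    }
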
	I1124 09:30:14.344499  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.385375  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.385396  346330 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:14.385438  346330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:14.415659  346330 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:14.415679  346330 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:14.415687  346330 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1124 09:30:14.415796  346330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-164377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:30:14.415855  346330 ssh_runner.go:195] Run: crio config
	I1124 09:30:14.467415  346330 cni.go:84] Creating CNI manager for ""
	I1124 09:30:14.467440  346330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:14.467457  346330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:14.467485  346330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-164377 NodeName:default-k8s-diff-port-164377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:14.467665  346330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-164377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:30:14.467740  346330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:14.477297  346330 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:14.477386  346330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:14.486666  346330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 09:30:14.501581  346330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:14.516622  346330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
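
The kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), shipped as /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that splits such a multi-document file and reports each document's kind; it assumes the third-party gopkg.in/yaml.v3 module, and the local path is illustrative:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // third-party: go get gopkg.in/yaml.v3
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // path is illustrative
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Expect the four kinds shown in the log above.
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }
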
	I1124 09:30:14.531939  346330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:14.536699  346330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
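
The two /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same shell pattern: filter out any stale line for the name, append the fresh mapping, and copy the result back into place. The same filter-then-append idea in Go, as a sketch; the file path is illustrative, and unlike the shell pipeline this version drops blank lines for simplicity:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites path so that exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	// Stage in a temp file, then swap it in, as the shell version
    	// stages in /tmp/h.$$ to avoid leaving a truncated hosts file.
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := upsertHost("hosts.sample", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
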
	I1124 09:30:14.551687  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:14.653461  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:14.689043  346330 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377 for IP: 192.168.85.2
	I1124 09:30:14.689069  346330 certs.go:195] generating shared ca certs ...
	I1124 09:30:14.689088  346330 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:14.689257  346330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:14.689322  346330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:14.689350  346330 certs.go:257] generating profile certs ...
	I1124 09:30:14.689449  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/client.key
	I1124 09:30:14.689523  346330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key.5d8312b5
	I1124 09:30:14.689584  346330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key
	I1124 09:30:14.689713  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:14.689756  346330 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:14.689770  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:14.689805  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:14.689846  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:14.689877  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:14.689936  346330 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:14.690834  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:14.713491  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:14.733133  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:14.755304  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:14.781644  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 09:30:14.807149  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:30:14.826555  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:14.849289  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/default-k8s-diff-port-164377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:30:14.868866  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:14.900899  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:14.927265  346330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:14.951934  346330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:14.968305  346330 ssh_runner.go:195] Run: openssl version
	I1124 09:30:14.977188  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:14.988887  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993793  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:14.993849  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:15.044783  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:15.062885  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:15.073450  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078558  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.078611  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:15.125021  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:15.134555  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:15.145840  346330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150712  346330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.150766  346330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:15.193031  346330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:30:15.203009  346330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:15.208170  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:30:15.268668  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:30:15.330529  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:30:15.386730  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:30:15.450702  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:30:15.510222  346330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
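
The six openssl invocations above are 24-hour expiry checks (-checkend 86400) against the control-plane certificates; a nonzero exit would trigger regeneration. The equivalent check in Go's crypto/x509, as a sketch with an illustrative input path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Equivalent of `openssl x509 -noout -in apiserver.crt -checkend 86400`:
    	// fail if the certificate expires within the next 24 hours.
    	data, err := os.ReadFile("apiserver.crt") // path is illustrative
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid past the check window")
    }
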
	I1124 09:30:15.573346  346330 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-164377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-164377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:15.573548  346330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:15.573633  346330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:15.617052  346330 cri.go:89] found id: "dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8"
	I1124 09:30:15.617070  346330 cri.go:89] found id: "892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99"
	I1124 09:30:15.617076  346330 cri.go:89] found id: "4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe"
	I1124 09:30:15.617088  346330 cri.go:89] found id: "4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7"
	I1124 09:30:15.617092  346330 cri.go:89] found id: ""
	I1124 09:30:15.617135  346330 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:30:15.636984  346330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:30:15Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:30:15.638440  346330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:15.649204  346330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:30:15.649226  346330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:30:15.649270  346330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:30:15.663887  346330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:30:15.664735  346330 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-164377" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.665194  346330 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-164377" cluster setting kubeconfig missing "default-k8s-diff-port-164377" context setting]
	I1124 09:30:15.666227  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.668691  346330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:30:15.680140  346330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:30:15.680180  346330 kubeadm.go:602] duration metric: took 30.938163ms to restartPrimaryControlPlane
	I1124 09:30:15.680189  346330 kubeadm.go:403] duration metric: took 106.868907ms to StartCluster
	I1124 09:30:15.680202  346330 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.680258  346330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:15.681803  346330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:15.682046  346330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:15.682240  346330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:15.682422  346330 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682447  346330 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682456  346330 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:30:15.682523  346330 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682554  346330 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.682573  346330 addons.go:248] addon dashboard should already be in state true
	I1124 09:30:15.682612  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.682679  346330 config.go:182] Loaded profile config "default-k8s-diff-port-164377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:15.682721  346330 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-164377"
	I1124 09:30:15.682735  346330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-164377"
	I1124 09:30:15.683004  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683176  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683179  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.683615  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.683830  346330 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:15.685123  346330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:15.719127  346330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:30:15.719844  346330 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-164377"
	W1124 09:30:15.719950  346330 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:30:15.720006  346330 host.go:66] Checking if "default-k8s-diff-port-164377" exists ...
	I1124 09:30:15.720557  346330 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-164377 --format={{.State.Status}}
	I1124 09:30:15.721200  346330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:30:15.721225  346330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:15.722276  346330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.722291  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:15.722368  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.722497  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:30:15.722505  346330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:30:15.722550  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.760598  346330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:15.760694  346330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:15.760791  346330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-164377
	I1124 09:30:15.761102  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.768663  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.809271  346330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/default-k8s-diff-port-164377/id_rsa Username:docker}
	I1124 09:30:15.913227  346330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:15.931974  346330 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:15.958496  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:30:15.958523  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:30:15.961696  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:15.982191  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:30:15.982217  346330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:30:15.984451  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:16.003515  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:30:16.003603  346330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:30:16.025926  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:30:16.025949  346330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:30:16.049115  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:30:16.049141  346330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:30:16.070292  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:30:16.070316  346330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:30:16.087883  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:30:16.087909  346330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:30:16.107837  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:30:16.107859  346330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:30:16.130726  346330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:30:16.130811  346330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:30:16.152225  346330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
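
The addon installs above shell out to the versioned kubectl binary with KUBECONFIG pinned in the environment rather than relying on the caller's configuration. A Go sketch of that env-pinned exec pattern; the paths are copied from the log for illustration, and this is not minikube's own helper:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Apply a manifest with an explicit kubeconfig, as the log's
    	// `sudo KUBECONFIG=... kubectl apply -f ...` invocations do.
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.2/kubectl",
    		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		os.Exit(1)
    	}
    }
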
	I1124 09:30:14.455914  350567 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:30:14.456108  350567 start.go:159] libmachine.API.Create for "embed-certs-673346" (driver="docker")
	I1124 09:30:14.456138  350567 client.go:173] LocalClient.Create starting
	I1124 09:30:14.456212  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem
	I1124 09:30:14.456244  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456264  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456310  350567 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem
	I1124 09:30:14.456355  350567 main.go:143] libmachine: Decoding PEM data...
	I1124 09:30:14.456379  350567 main.go:143] libmachine: Parsing certificate...
	I1124 09:30:14.456793  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:30:14.478660  350567 cli_runner.go:211] docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:30:14.478755  350567 network_create.go:284] running [docker network inspect embed-certs-673346] to gather additional debugging logs...
	I1124 09:30:14.478786  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346
	W1124 09:30:14.498235  350567 cli_runner.go:211] docker network inspect embed-certs-673346 returned with exit code 1
	I1124 09:30:14.498267  350567 network_create.go:287] error running [docker network inspect embed-certs-673346]: docker network inspect embed-certs-673346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-673346 not found
	I1124 09:30:14.498281  350567 network_create.go:289] output of [docker network inspect embed-certs-673346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-673346 not found
	
	** /stderr **
	I1124 09:30:14.498385  350567 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:14.520018  350567 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
	I1124 09:30:14.520793  350567 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c977c796f084 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:34:cc:6d:f9:2b} reservation:<nil>}
	I1124 09:30:14.521788  350567 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2994a163bb80 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:ca:61:f0:c2:2e} reservation:<nil>}
	I1124 09:30:14.522707  350567 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5a70}
	I1124 09:30:14.522732  350567 network_create.go:124] attempt to create docker network embed-certs-673346 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 09:30:14.522785  350567 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-673346 embed-certs-673346
	I1124 09:30:14.586516  350567 network_create.go:108] docker network embed-certs-673346 192.168.76.0/24 created
	I1124 09:30:14.586547  350567 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-673346" container
	I1124 09:30:14.586627  350567 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:30:14.610804  350567 cli_runner.go:164] Run: docker volume create embed-certs-673346 --label name.minikube.sigs.k8s.io=embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:30:14.632832  350567 oci.go:103] Successfully created a docker volume embed-certs-673346
	I1124 09:30:14.632925  350567 cli_runner.go:164] Run: docker run --rm --name embed-certs-673346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --entrypoint /usr/bin/test -v embed-certs-673346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:30:15.090593  350567 oci.go:107] Successfully prepared a docker volume embed-certs-673346
	I1124 09:30:15.090677  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:15.090690  350567 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:30:15.090748  350567 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:30:18.723622  346330 node_ready.go:49] node "default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:18.723658  346330 node_ready.go:38] duration metric: took 2.791273581s for node "default-k8s-diff-port-164377" to be "Ready" ...
	I1124 09:30:18.723674  346330 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:18.723726  346330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:19.762798  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.801068095s)
	I1124 09:30:19.762854  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.778333862s)
	I1124 09:30:19.809952  346330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.086204016s)
	I1124 09:30:19.809990  346330 api_server.go:72] duration metric: took 4.127914679s to wait for apiserver process to appear ...
	I1124 09:30:19.809999  346330 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:19.810019  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:19.810840  346330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65851014s)
	I1124 09:30:19.812854  346330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-164377 addons enable metrics-server
	
	I1124 09:30:19.814608  346330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:30:19.815981  346330 addons.go:530] duration metric: took 4.133745613s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:30:19.819288  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:30:19.819490  346330 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:30:20.310801  346330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:30:20.318184  346330 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:30:20.320089  346330 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:20.320238  346330 api_server.go:131] duration metric: took 510.229099ms to wait for apiserver health ...
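
The healthz exchange above is a plain HTTPS poll: the first probe returns 500 while poststarthook/rbac/bootstrap-roles is still pending, and the retry roughly half a second later returns ok. A Go sketch of such a poll follows; skipping TLS verification is an assumption for the sketch, since the probe dials the raw node IP rather than a name on the serving certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8444/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			// e.g. 500 while a poststarthook is still pending
    			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }
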
	I1124 09:30:20.320485  346330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:20.328441  346330 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:20.328478  346330 system_pods.go:61] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.328490  346330 system_pods.go:61] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.328498  346330 system_pods.go:61] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.328506  346330 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.328515  346330 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.328521  346330 system_pods.go:61] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.328529  346330 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.328534  346330 system_pods.go:61] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.328541  346330 system_pods.go:74] duration metric: took 7.85104ms to wait for pod list to return data ...
	I1124 09:30:20.328554  346330 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:20.332981  346330 default_sa.go:45] found service account: "default"
	I1124 09:30:20.333009  346330 default_sa.go:55] duration metric: took 4.449084ms for default service account to be created ...
	I1124 09:30:20.333021  346330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:20.338641  346330 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:20.338682  346330 system_pods.go:89] "coredns-66bc5c9577-gn9zx" [d4debacc-a7df-4bc4-9d87-249d44299f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:20.338698  346330 system_pods.go:89] "etcd-default-k8s-diff-port-164377" [cade151c-fd1c-449a-970a-209655d139e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:30:20.338709  346330 system_pods.go:89] "kindnet-kwvs7" [1a42b3c2-0e78-4e1d-9d47-e4b7ce5bdb07] Running
	I1124 09:30:20.338718  346330 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-164377" [86ac4fd2-8980-46f6-84f0-8e9634c4e05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:30:20.338727  346330 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-164377" [0ac3edd4-0253-48fb-81be-d68145d12846] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:30:20.338734  346330 system_pods.go:89] "kube-proxy-2vm2s" [137008cc-e397-4752-952e-f66903bce62a] Running
	I1124 09:30:20.338741  346330 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-164377" [6278da03-1c89-4fbd-941b-b75d4909e9d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:30:20.338747  346330 system_pods.go:89] "storage-provisioner" [829aa957-d18b-4e5d-b3ae-dca550b9db5d] Running
	I1124 09:30:20.338757  346330 system_pods.go:126] duration metric: took 5.728957ms to wait for k8s-apps to be running ...
	I1124 09:30:20.338767  346330 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:20.338820  346330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:20.357733  346330 system_svc.go:56] duration metric: took 18.956624ms WaitForService to wait for kubelet
	I1124 09:30:20.358599  346330 kubeadm.go:587] duration metric: took 4.676515085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:20.358629  346330 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:20.363231  346330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:20.363257  346330 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:20.363289  346330 node_conditions.go:105] duration metric: took 4.654352ms to run NodePressure ...
	I1124 09:30:20.363303  346330 start.go:242] waiting for startup goroutines ...
	I1124 09:30:20.363313  346330 start.go:247] waiting for cluster config update ...
	I1124 09:30:20.363345  346330 start.go:256] writing updated cluster config ...
	I1124 09:30:20.363650  346330 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:20.369452  346330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:20.373717  346330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
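
The pod_ready wait that starts here (and whose "is not Ready" warnings recur below) reduces to reading the PodReady condition off each pod. A sketch against client-go, with the namespace and pod name taken from the log; the kubeconfig path is purely hypothetical and nothing here is minikube's actual implementation:

    // Sketch: check whether a pod reports the PodReady condition as True,
    // the test driving the pod_ready.go lines in this log.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Hypothetical kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-gn9zx")
    	fmt.Println(ready, err)
    }
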
	I1124 09:30:19.611672  350567 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-673346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.520871166s)
	I1124 09:30:19.611715  350567 kic.go:203] duration metric: took 4.521020447s to extract preloaded images to volume ...
	W1124 09:30:19.612119  350567 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:30:19.612200  350567 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:30:19.612273  350567 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:30:19.706294  350567 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-673346 --name embed-certs-673346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-673346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-673346 --network embed-certs-673346 --ip 192.168.76.2 --volume embed-certs-673346:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:30:20.123957  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Running}}
	I1124 09:30:20.146501  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.169846  350567 cli_runner.go:164] Run: docker exec embed-certs-673346 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:30:20.229570  350567 oci.go:144] the created container "embed-certs-673346" has a running status.
	I1124 09:30:20.229610  350567 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa...
	I1124 09:30:20.290959  350567 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:30:20.332257  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.366886  350567 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:30:20.366912  350567 kic_runner.go:114] Args: [docker exec --privileged embed-certs-673346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:30:20.421029  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:20.448864  350567 machine.go:94] provisionDockerMachine start ...
	I1124 09:30:20.448975  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:20.471107  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:20.471475  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:20.471493  350567 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:30:20.472225  350567 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52462->127.0.0.1:33133: read: connection reset by peer
	I1124 09:30:23.653448  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.653510  350567 ubuntu.go:182] provisioning hostname "embed-certs-673346"
	I1124 09:30:23.653756  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.678607  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.678937  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.678958  350567 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-673346 && echo "embed-certs-673346" | sudo tee /etc/hostname
	I1124 09:30:23.850425  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:30:23.850503  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:23.874386  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:23.874730  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:23.874760  350567 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-673346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-673346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-673346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:30:24.034104  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:30:24.034135  350567 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:30:24.034160  350567 ubuntu.go:190] setting up certificates
	I1124 09:30:24.034174  350567 provision.go:84] configureAuth start
	I1124 09:30:24.034235  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.056481  350567 provision.go:143] copyHostCerts
	I1124 09:30:24.056552  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:30:24.056564  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:30:24.056628  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:30:24.056755  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:30:24.056763  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:30:24.056806  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:30:24.056918  350567 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:30:24.056931  350567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:30:24.056973  350567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:30:24.057091  350567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-673346 san=[127.0.0.1 192.168.76.2 embed-certs-673346 localhost minikube]
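
The san=[...] list above is what ends up in the server certificate's subject alternative names. A self-contained Go sketch of that step, splitting the log's SAN list into DNS names and IP addresses (self-signed here for brevity; minikube signs with its CA, and this helper is not its actual code):

    // Sketch: build a server certificate carrying the SANs from the log line
    // above (DNS: embed-certs-673346, localhost, minikube; IP: 127.0.0.1, 192.168.76.2).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-673346"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The log's san=[...] entries, split by type:
    		DNSNames:    []string{"embed-certs-673346", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
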
	I1124 09:30:24.206865  350567 provision.go:177] copyRemoteCerts
	I1124 09:30:24.206922  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:30:24.206955  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.226403  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	W1124 09:30:22.380052  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:24.380391  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:24.331162  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:30:24.354961  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:30:24.377647  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:30:24.400776  350567 provision.go:87] duration metric: took 366.587357ms to configureAuth
	I1124 09:30:24.400805  350567 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:30:24.400996  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:24.401117  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.424078  350567 main.go:143] libmachine: Using SSH client type: native
	I1124 09:30:24.424426  350567 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1124 09:30:24.424457  350567 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:30:24.754396  350567 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:30:24.754421  350567 machine.go:97] duration metric: took 4.30553632s to provisionDockerMachine
	I1124 09:30:24.754433  350567 client.go:176] duration metric: took 10.29828879s to LocalClient.Create
	I1124 09:30:24.754450  350567 start.go:167] duration metric: took 10.298341795s to libmachine.API.Create "embed-certs-673346"
	I1124 09:30:24.754459  350567 start.go:293] postStartSetup for "embed-certs-673346" (driver="docker")
	I1124 09:30:24.754471  350567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:30:24.754538  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:30:24.754583  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.780786  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:24.896450  350567 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:30:24.900141  350567 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:30:24.900169  350567 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:30:24.900181  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:30:24.900238  350567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:30:24.900352  350567 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:30:24.900469  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:30:24.908686  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:24.930429  350567 start.go:296] duration metric: took 175.958432ms for postStartSetup
	I1124 09:30:24.930756  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:24.951946  350567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:30:24.952213  350567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:30:24.952254  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:24.971774  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.084790  350567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:30:25.089773  350567 start.go:128] duration metric: took 10.635765016s to createHost
	I1124 09:30:25.089794  350567 start.go:83] releasing machines lock for "embed-certs-673346", held for 10.635883769s
	I1124 09:30:25.089855  350567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:30:25.107834  350567 ssh_runner.go:195] Run: cat /version.json
	I1124 09:30:25.107876  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.107876  350567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:30:25.107963  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:25.126027  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.127155  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:25.225482  350567 ssh_runner.go:195] Run: systemctl --version
	I1124 09:30:25.285543  350567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:30:25.321857  350567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:30:25.327941  350567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:30:25.328019  350567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:30:25.524839  350567 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:30:25.524863  350567 start.go:496] detecting cgroup driver to use...
	I1124 09:30:25.524891  350567 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:30:25.524934  350567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:30:25.542024  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:30:25.555182  350567 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:30:25.555243  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:30:25.572649  350567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:30:25.594452  350567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:30:25.688181  350567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:30:25.780701  350567 docker.go:234] disabling docker service ...
	I1124 09:30:25.780765  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:30:25.801555  350567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:30:25.816167  350567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:30:25.936230  350567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:30:26.054601  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:30:26.071219  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:30:26.089974  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:26.264765  350567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:30:26.264839  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.285104  350567 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:30:26.285169  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.296551  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.307239  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.318284  350567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:30:26.328483  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.339222  350567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.356669  350567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:30:26.367765  350567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:30:26.377490  350567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:30:26.386986  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:26.502123  350567 ssh_runner.go:195] Run: sudo systemctl restart crio
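
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. Reconstructed from those sed expressions (section headers assumed from CRI-O's stock layout, not captured from the node), the touched keys should end up roughly as:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
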
	I1124 09:30:26.740769  350567 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:30:26.740830  350567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:30:26.745782  350567 start.go:564] Will wait 60s for crictl version
	I1124 09:30:26.745832  350567 ssh_runner.go:195] Run: which crictl
	I1124 09:30:26.750426  350567 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:30:26.783507  350567 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:30:26.783585  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.821826  350567 ssh_runner.go:195] Run: crio --version
	I1124 09:30:26.866519  350567 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:30:26.868046  350567 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:30:26.895427  350567 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:30:26.900350  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:26.913358  350567 kubeadm.go:884] updating cluster {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:30:26.913735  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.099545  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.285631  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.447699  350567 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:30:27.447838  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.627950  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.802057  350567 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:30:27.982378  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.023587  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.023612  350567 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:30:28.023667  350567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:30:28.057634  350567 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:30:28.057658  350567 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:30:28.057667  350567 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 09:30:28.057782  350567 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-673346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
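
A note on the kubelet unit fragment above: the bare ExecStart= line is deliberate. In a systemd drop-in, an empty ExecStart= first clears the command inherited from the base unit, and the ExecStart= that follows installs the override, so the kubelet starts with exactly the flags shown rather than appending to whatever /lib/systemd/system/kubelet.service defines.
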
	I1124 09:30:28.057861  350567 ssh_runner.go:195] Run: crio config
	I1124 09:30:28.125113  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:28.125141  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:28.125163  350567 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:30:28.125194  350567 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-673346 NodeName:embed-certs-673346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:30:28.125384  350567 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-673346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
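
The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml below. A file like this can be sanity-checked without mutating the node via kubeadm's dry-run mode, e.g.:

    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
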
	
	I1124 09:30:28.125457  350567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:30:28.136211  350567 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:30:28.136278  350567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:30:28.146766  350567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 09:30:28.162970  350567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:30:28.183026  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 09:30:28.199769  350567 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:30:28.204631  350567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:30:28.216670  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:28.333908  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:28.358960  350567 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346 for IP: 192.168.76.2
	I1124 09:30:28.358982  350567 certs.go:195] generating shared ca certs ...
	I1124 09:30:28.359000  350567 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.359152  350567 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:30:28.359204  350567 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:30:28.359216  350567 certs.go:257] generating profile certs ...
	I1124 09:30:28.359284  350567 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key
	I1124 09:30:28.359301  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt with IP's: []
	I1124 09:30:28.437471  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt ...
	I1124 09:30:28.437495  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.crt: {Name:mk8b7253b9b301c91d2672344892984576a60144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437641  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key ...
	I1124 09:30:28.437654  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key: {Name:mk2a06bce20bfcf3fd65f78bc031396f7e03338b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.437728  350567 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844
	I1124 09:30:28.437742  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 09:30:28.481815  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 ...
	I1124 09:30:28.481840  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844: {Name:mk5fa5046e27fe7d2f0e4475b095f002a239fd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482010  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 ...
	I1124 09:30:28.482030  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844: {Name:mkd08edea57155db981a087021feb4524402ea29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.482143  350567 certs.go:382] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt
	I1124 09:30:28.482230  350567 certs.go:386] copying /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844 -> /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key
	I1124 09:30:28.482292  350567 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key
	I1124 09:30:28.482308  350567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt with IP's: []
	I1124 09:30:28.544080  350567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt ...
	I1124 09:30:28.544107  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt: {Name:mkfd4e68c065efc0731596098a6a75426ddfaab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544288  350567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key ...
	I1124 09:30:28.544305  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key: {Name:mk2884332d5edfe59fc22312877e42be26c5e588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:28.544523  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:30:28.544565  350567 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:30:28.544576  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:30:28.544600  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:30:28.544632  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:30:28.544654  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:30:28.544696  350567 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:30:28.545236  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:30:28.564379  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:30:28.581704  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:30:28.598963  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:30:28.616453  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:30:28.634084  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:30:28.650828  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:30:28.667240  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:30:28.683947  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:30:28.702546  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:30:28.721403  350567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:30:28.738492  350567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:30:28.750548  350567 ssh_runner.go:195] Run: openssl version
	I1124 09:30:28.756295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:30:28.764541  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768172  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.768220  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:30:28.802295  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:30:28.811614  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:30:28.819817  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823382  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.823443  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:30:28.856763  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:30:28.865271  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:30:28.873545  350567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877598  350567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.877655  350567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:30:28.919370  350567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
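
The openssl x509 -hash -noout runs above compute each certificate's subject hash, and the symlinks created right after (3ec20f2e.0, b5213941.0, 51391683.0) are named for exactly those hashes; that is OpenSSL's lookup convention for CA directories like /etc/ssl/certs. To check one by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # expected to print b5213941, matching the /etc/ssl/certs/b5213941.0 link above
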
	I1124 09:30:28.928260  350567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:30:28.931896  350567 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:30:28.931943  350567 kubeadm.go:401] StartCluster: {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:30:28.932015  350567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:30:28.932059  350567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:30:28.958681  350567 cri.go:89] found id: ""
	I1124 09:30:28.958744  350567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:30:28.966642  350567 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:30:28.974403  350567 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:30:28.974471  350567 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:30:28.981638  350567 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:30:28.981659  350567 kubeadm.go:158] found existing configuration files:
	
	I1124 09:30:28.981689  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:30:28.989069  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:30:28.989126  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:30:28.996227  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:30:29.003603  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:30:29.003655  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:30:29.010559  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.017747  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:30:29.017791  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:30:29.024818  350567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:30:29.032072  350567 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:30:29.032110  350567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:30:29.039184  350567 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:30:29.109731  350567 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:30:29.170264  350567 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 09:30:26.880546  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:29.379444  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:31.879625  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:34.379563  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:38.748399  350567 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:30:38.748506  350567 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:30:38.748626  350567 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:30:38.748685  350567 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:30:38.748714  350567 kubeadm.go:319] OS: Linux
	I1124 09:30:38.748760  350567 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:30:38.748798  350567 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:30:38.748841  350567 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:30:38.748881  350567 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:30:38.748952  350567 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:30:38.749042  350567 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:30:38.749116  350567 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:30:38.749159  350567 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:30:38.749218  350567 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:30:38.749302  350567 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:30:38.749395  350567 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:30:38.749449  350567 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:30:38.750906  350567 out.go:252]   - Generating certificates and keys ...
	I1124 09:30:38.750995  350567 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:30:38.751089  350567 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:30:38.751177  350567 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:30:38.751224  350567 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:30:38.751273  350567 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:30:38.751317  350567 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:30:38.751438  350567 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:30:38.751613  350567 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751694  350567 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:30:38.751864  350567 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-673346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 09:30:38.751935  350567 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:30:38.752013  350567 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:30:38.752054  350567 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:30:38.752103  350567 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:30:38.752193  350567 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:30:38.752302  350567 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:30:38.752409  350567 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:30:38.752476  350567 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:30:38.752520  350567 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:30:38.752585  350567 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:30:38.752640  350567 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:30:38.753908  350567 out.go:252]   - Booting up control plane ...
	I1124 09:30:38.753982  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:30:38.754048  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:30:38.754101  350567 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:30:38.754214  350567 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:30:38.754351  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:30:38.754483  350567 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:30:38.754595  350567 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:30:38.754657  350567 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:30:38.754803  350567 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:30:38.754931  350567 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:30:38.755022  350567 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.798528ms
	I1124 09:30:38.755160  350567 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:30:38.755241  350567 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 09:30:38.755362  350567 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:30:38.755437  350567 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:30:38.755504  350567 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.263591063s
	I1124 09:30:38.755565  350567 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.451899812s
	I1124 09:30:38.755620  350567 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002226153s
	I1124 09:30:38.755712  350567 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:30:38.755827  350567 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:30:38.755921  350567 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:30:38.756130  350567 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-673346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:30:38.756180  350567 kubeadm.go:319] [bootstrap-token] Using token: s5v8q1.i02i5m2whwuijtw1
	I1124 09:30:38.757350  350567 out.go:252]   - Configuring RBAC rules ...
	I1124 09:30:38.757460  350567 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:30:38.757561  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:30:38.757739  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:30:38.757875  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:30:38.758003  350567 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:30:38.758127  350567 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:30:38.758258  350567 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:30:38.758326  350567 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:30:38.758411  350567 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:30:38.758419  350567 kubeadm.go:319] 
	I1124 09:30:38.758489  350567 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:30:38.758501  350567 kubeadm.go:319] 
	I1124 09:30:38.758566  350567 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:30:38.758571  350567 kubeadm.go:319] 
	I1124 09:30:38.758593  350567 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:30:38.758643  350567 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:30:38.758691  350567 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:30:38.758696  350567 kubeadm.go:319] 
	I1124 09:30:38.758770  350567 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:30:38.758783  350567 kubeadm.go:319] 
	I1124 09:30:38.758851  350567 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:30:38.758864  350567 kubeadm.go:319] 
	I1124 09:30:38.758912  350567 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:30:38.758992  350567 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:30:38.759090  350567 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:30:38.759098  350567 kubeadm.go:319] 
	I1124 09:30:38.759200  350567 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:30:38.759305  350567 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:30:38.759315  350567 kubeadm.go:319] 
	I1124 09:30:38.759405  350567 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759526  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 \
	I1124 09:30:38.759550  350567 kubeadm.go:319] 	--control-plane 
	I1124 09:30:38.759555  350567 kubeadm.go:319] 
	I1124 09:30:38.759678  350567 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:30:38.759688  350567 kubeadm.go:319] 
	I1124 09:30:38.759796  350567 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s5v8q1.i02i5m2whwuijtw1 \
	I1124 09:30:38.759949  350567 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6a280816d503ad022be657010cd456d24f45710f9b10fd2cba2f60ee09c091f3 
	I1124 09:30:38.759964  350567 cni.go:84] Creating CNI manager for ""
	I1124 09:30:38.759972  350567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:30:38.761479  350567 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:30:38.762425  350567 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:30:38.766592  350567 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:30:38.766608  350567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:30:38.779390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:30:38.984251  350567 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:30:38.984311  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:38.984390  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-673346 minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-673346 minikube.k8s.io/primary=true
	I1124 09:30:38.993988  350567 ops.go:34] apiserver oom_adj: -16
	I1124 09:30:39.059843  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 09:30:36.879966  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:38.880135  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:41.379898  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:39.560572  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.059940  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:40.560087  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.060193  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:41.560831  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.059938  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:42.560813  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.059947  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.560941  350567 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:30:43.633307  350567 kubeadm.go:1114] duration metric: took 4.64904392s to wait for elevateKubeSystemPrivileges
	I1124 09:30:43.633356  350567 kubeadm.go:403] duration metric: took 14.701415807s to StartCluster
	I1124 09:30:43.633377  350567 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.633432  350567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:30:43.634680  350567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:30:43.634890  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:30:43.634909  350567 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:30:43.634960  350567 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-673346"
	I1124 09:30:43.634893  350567 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:30:43.634981  350567 addons.go:70] Setting default-storageclass=true in profile "embed-certs-673346"
	I1124 09:30:43.635001  350567 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-673346"
	I1124 09:30:43.634977  350567 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-673346"
	I1124 09:30:43.635127  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.635080  350567 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:30:43.635321  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.635592  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.637167  350567 out.go:179] * Verifying Kubernetes components...
	I1124 09:30:43.638326  350567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:30:43.662561  350567 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:30:43.663142  350567 addons.go:239] Setting addon default-storageclass=true in "embed-certs-673346"
	I1124 09:30:43.663183  350567 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:30:43.663673  350567 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:30:43.663998  350567 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.664015  350567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:30:43.664062  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691153  350567 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.691177  350567 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:30:43.691228  350567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:30:43.691586  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.714047  350567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:30:43.730614  350567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:30:43.771446  350567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:30:43.810568  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:30:43.828496  350567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:30:43.934529  350567 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 09:30:43.935556  350567 node_ready.go:35] waiting up to 6m0s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:44.140071  350567 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:30:44.141027  350567 addons.go:530] duration metric: took 506.116331ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1124 09:30:43.881243  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:46.378931  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:44.438080  350567 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-673346" context rescaled to 1 replicas
	W1124 09:30:45.938541  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:47.939500  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:48.879272  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	W1124 09:30:50.879407  346330 pod_ready.go:104] pod "coredns-66bc5c9577-gn9zx" is not "Ready", error: <nil>
	I1124 09:30:52.380637  346330 pod_ready.go:94] pod "coredns-66bc5c9577-gn9zx" is "Ready"
	I1124 09:30:52.380665  346330 pod_ready.go:86] duration metric: took 32.006923448s for pod "coredns-66bc5c9577-gn9zx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.383181  346330 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.386812  346330 pod_ready.go:94] pod "etcd-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.386836  346330 pod_ready.go:86] duration metric: took 3.636091ms for pod "etcd-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.388809  346330 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.392121  346330 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.392137  346330 pod_ready.go:86] duration metric: took 3.312038ms for pod "kube-apiserver-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.393861  346330 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.577642  346330 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:52.577666  346330 pod_ready.go:86] duration metric: took 183.789548ms for pod "kube-controller-manager-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:52.777477  346330 pod_ready.go:83] waiting for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.177318  346330 pod_ready.go:94] pod "kube-proxy-2vm2s" is "Ready"
	I1124 09:30:53.177358  346330 pod_ready.go:86] duration metric: took 399.857272ms for pod "kube-proxy-2vm2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.377464  346330 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777288  346330 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-164377" is "Ready"
	I1124 09:30:53.777312  346330 pod_ready.go:86] duration metric: took 399.822555ms for pod "kube-scheduler-default-k8s-diff-port-164377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:53.777323  346330 pod_ready.go:40] duration metric: took 33.407838856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:53.820139  346330 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:53.822727  346330 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-164377" cluster and "default" namespace by default
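
The pod_ready lines above poll each labelled control-plane pod until its Ready condition flips to True. A minimal client-go sketch of that check, assuming a local kubeconfig and reusing the coredns pod name from the log; minikube's own pod_ready.go additionally handles timeouts and pods that disappear:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-check on roughly the 2.5s cadence visible in the W-lines above.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").
			Get(context.TODO(), "coredns-66bc5c9577-gn9zx", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
```
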
	W1124 09:30:49.939645  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	W1124 09:30:52.439029  350567 node_ready.go:57] node "embed-certs-673346" has "Ready":"False" status (will retry)
	I1124 09:30:54.938888  350567 node_ready.go:49] node "embed-certs-673346" is "Ready"
	I1124 09:30:54.938913  350567 node_ready.go:38] duration metric: took 11.003315497s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:30:54.938926  350567 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:30:54.938977  350567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:30:54.950812  350567 api_server.go:72] duration metric: took 11.315807298s to wait for apiserver process to appear ...
	I1124 09:30:54.950847  350567 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:30:54.950868  350567 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:30:54.956132  350567 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:30:54.957173  350567 api_server.go:141] control plane version: v1.34.2
	I1124 09:30:54.957194  350567 api_server.go:131] duration metric: took 6.340368ms to wait for apiserver health ...
	I1124 09:30:54.957201  350567 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:30:54.960442  350567 system_pods.go:59] 8 kube-system pods found
	I1124 09:30:54.960470  350567 system_pods.go:61] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.960475  350567 system_pods.go:61] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.960481  350567 system_pods.go:61] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.960484  350567 system_pods.go:61] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.960489  350567 system_pods.go:61] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.960492  350567 system_pods.go:61] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.960495  350567 system_pods.go:61] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.960503  350567 system_pods.go:61] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.960507  350567 system_pods.go:74] duration metric: took 3.301271ms to wait for pod list to return data ...
	I1124 09:30:54.960515  350567 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:30:54.962760  350567 default_sa.go:45] found service account: "default"
	I1124 09:30:54.962777  350567 default_sa.go:55] duration metric: took 2.256858ms for default service account to be created ...
	I1124 09:30:54.962784  350567 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:30:54.967150  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:54.967177  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:54.967185  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:54.967193  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:54.967199  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:54.967205  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:54.967216  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:54.967226  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:54.967234  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:54.967264  350567 retry.go:31] will retry after 253.013546ms: missing components: kube-dns
	I1124 09:30:55.224543  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.224572  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.224578  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.224584  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.224589  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.224595  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.224599  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.224604  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.224619  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.224640  350567 retry.go:31] will retry after 278.082193ms: missing components: kube-dns
	I1124 09:30:55.506580  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.506609  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.506618  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.506625  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.506630  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.506636  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.506641  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.506646  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.506661  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.506688  350567 retry.go:31] will retry after 307.004154ms: missing components: kube-dns
	I1124 09:30:55.818537  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:55.818854  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:30:55.818862  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:55.818868  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:55.818872  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:55.818877  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:55.818881  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:55.818885  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:55.818890  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:30:55.818908  350567 retry.go:31] will retry after 519.354598ms: missing components: kube-dns
	I1124 09:30:56.341803  350567 system_pods.go:86] 8 kube-system pods found
	I1124 09:30:56.341831  350567 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running
	I1124 09:30:56.341837  350567 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running
	I1124 09:30:56.341841  350567 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running
	I1124 09:30:56.341845  350567 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running
	I1124 09:30:56.341849  350567 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running
	I1124 09:30:56.341853  350567 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running
	I1124 09:30:56.341856  350567 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running
	I1124 09:30:56.341861  350567 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running
	I1124 09:30:56.341873  350567 system_pods.go:126] duration metric: took 1.379080603s to wait for k8s-apps to be running ...
	I1124 09:30:56.341884  350567 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:30:56.341932  350567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:30:56.354699  350567 system_svc.go:56] duration metric: took 12.804001ms WaitForService to wait for kubelet
	I1124 09:30:56.354739  350567 kubeadm.go:587] duration metric: took 12.719737164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:30:56.354759  350567 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:30:56.357637  350567 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:30:56.357683  350567 node_conditions.go:123] node cpu capacity is 8
	I1124 09:30:56.357699  350567 node_conditions.go:105] duration metric: took 2.935054ms to run NodePressure ...
	I1124 09:30:56.357714  350567 start.go:242] waiting for startup goroutines ...
	I1124 09:30:56.357729  350567 start.go:247] waiting for cluster config update ...
	I1124 09:30:56.357742  350567 start.go:256] writing updated cluster config ...
	I1124 09:30:56.358065  350567 ssh_runner.go:195] Run: rm -f paused
	I1124 09:30:56.361813  350567 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:56.364971  350567 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.368743  350567 pod_ready.go:94] pod "coredns-66bc5c9577-vgl62" is "Ready"
	I1124 09:30:56.368763  350567 pod_ready.go:86] duration metric: took 3.773601ms for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.370575  350567 pod_ready.go:83] waiting for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.374046  350567 pod_ready.go:94] pod "etcd-embed-certs-673346" is "Ready"
	I1124 09:30:56.374066  350567 pod_ready.go:86] duration metric: took 3.473581ms for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.375786  350567 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.379022  350567 pod_ready.go:94] pod "kube-apiserver-embed-certs-673346" is "Ready"
	I1124 09:30:56.379041  350567 pod_ready.go:86] duration metric: took 3.236137ms for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.380667  350567 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.765494  350567 pod_ready.go:94] pod "kube-controller-manager-embed-certs-673346" is "Ready"
	I1124 09:30:56.765522  350567 pod_ready.go:86] duration metric: took 384.833249ms for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:56.965563  350567 pod_ready.go:83] waiting for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.365568  350567 pod_ready.go:94] pod "kube-proxy-m54gs" is "Ready"
	I1124 09:30:57.365594  350567 pod_ready.go:86] duration metric: took 400.007869ms for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.566275  350567 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966130  350567 pod_ready.go:94] pod "kube-scheduler-embed-certs-673346" is "Ready"
	I1124 09:30:57.966154  350567 pod_ready.go:86] duration metric: took 399.858862ms for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:30:57.966168  350567 pod_ready.go:40] duration metric: took 1.604321652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:30:58.012793  350567 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:30:58.014766  350567 out.go:179] * Done! kubectl is now configured to use "embed-certs-673346" cluster and "default" namespace by default
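
The retry.go lines earlier in this run ("will retry after 253ms / 278ms / 307ms / 519ms") show a jittered, growing backoff while waiting for kube-dns. A sketch of that retry shape with hypothetical names, not minikube's actual retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, growing delay — the
// pattern visible in the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Up to 50% jitter so concurrent waiters don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	i := 0
	_ = retryWithBackoff(5, 250*time.Millisecond, func() error {
		if i++; i < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
```
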
	
	
	==> CRI-O <==
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.944842711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945032835Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/facbdeb260606a1d4513f748d22dfcd0b47bdcf6695c6169eba61e4b3801e2bb/merged/etc/passwd: no such file or directory"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945067925Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/facbdeb260606a1d4513f748d22dfcd0b47bdcf6695c6169eba61e4b3801e2bb/merged/etc/group: no such file or directory"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.945392863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.972254661Z" level=info msg="Created container 2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6: kube-system/storage-provisioner/storage-provisioner" id=b4164a14-e737-4897-9ecb-87a7e9f088a5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.972893518Z" level=info msg="Starting container: 2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6" id=dde07ce9-d76a-45ba-99c4-5188042a1879 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:30:49 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:30:49.974758842Z" level=info msg="Started container" PID=1708 containerID=2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6 description=kube-system/storage-provisioner/storage-provisioner id=dde07ce9-d76a-45ba-99c4-5188042a1879 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd16e913a5558a47c96ac727c0ac6fbb13f439f5653eaf3e7c71cbb1f46c9347
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.203828191Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208119456Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208147384Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.208172571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211844652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211869175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.211895059Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215752066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215773423Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.215790247Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219211882Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219239645Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.219259502Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.22269139Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.222716546Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.22273718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.226039997Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:00 default-k8s-diff-port-164377 crio[567]: time="2025-11-24T09:31:00.226069041Z" level=info msg="Updated default CNI network name to kindnet"
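
The "CNI monitoring event" lines show CRI-O watching /etc/cni/net.d and re-reading the kindnet conflist on every CREATE/WRITE/RENAME as kindnet writes its temp file and renames it into place. A minimal sketch of such a directory watcher using fsnotify — the general mechanism, not CRI-O's actual code:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	// React to the CREATE/WRITE/RENAME events seen in the log above.
	for ev := range w.Events {
		if strings.Contains(ev.Name, ".conflist") {
			fmt.Printf("CNI monitoring event %s %q\n", ev.Op, ev.Name)
		}
	}
}
```
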
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2beca2bae5993       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   bd16e913a5558       storage-provisioner                                    kube-system
	f4d186acf5471       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   46cf238b43e84       dashboard-metrics-scraper-6ffb444bf9-vv9dn             kubernetes-dashboard
	185c940805d0b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   b792fb5e586ff       kubernetes-dashboard-855c9754f9-2hcnx                  kubernetes-dashboard
	1fe2a5522f1a1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   b3c7c0fcea6db       busybox                                                default
	8fd30725243cf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   731d4a9783396       coredns-66bc5c9577-gn9zx                               kube-system
	d1d380122c482       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   923fea32a34af       kube-proxy-2vm2s                                       kube-system
	35c10215bee00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   bd16e913a5558       storage-provisioner                                    kube-system
	cd5ff3cd6ed0a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   588108ec218f4       kindnet-kwvs7                                          kube-system
	dc45893b62892       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   302d82401367d       kube-scheduler-default-k8s-diff-port-164377            kube-system
	892148c6b52c0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   d1b3f6865e5ea       kube-controller-manager-default-k8s-diff-port-164377   kube-system
	4b1b6ab34f1c3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   9757bc356b892       kube-apiserver-default-k8s-diff-port-164377            kube-system
	4e50e4dcdd36d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   1d778d2764400       etcd-default-k8s-diff-port-164377                      kube-system
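
The table above is essentially `crictl ps -a` output; note storage-provisioner's running attempt 1 next to its exited attempt 0, and dashboard-metrics-scraper already on attempt 2. A sketch pulling the same fields from crictl's JSON output — the field names follow the CRI ListContainers JSON shape and should be treated as an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// psOutput models only the fields we read from `crictl ps -a -o json`
// (assumed CRI ListContainers JSON field names).
type psOutput struct {
	Containers []struct {
		ID       string `json:"id"`
		State    string `json:"state"`
		Metadata struct {
			Name    string `json:"name"`
			Attempt int    `json:"attempt"`
		} `json:"metadata"`
	} `json:"containers"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ps psOutput
	if err := json.Unmarshal(out, &ps); err != nil {
		log.Fatal(err)
	}
	for _, c := range ps.Containers {
		fmt.Printf("%-13.13s %-28s attempt=%d %s\n",
			c.ID, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
```
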
	
	
	==> coredns [8fd30725243cf74c22d0fc9ddf7c963a305c1829d64d4bfeaa81eec4f11cb627] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38974 - 37521 "HINFO IN 1023710936344039766.7822191422356292246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012097427s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
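
The i/o timeouts above are CoreDNS's kubernetes plugin failing to reach the apiserver's ClusterIP (10.96.0.1:443) before pod networking settles, which is why the plugin first starts with an unsynced API and later recovers. A tiny reachability probe for that address; the 3-second timeout is arbitrary:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the in-cluster apiserver service IP the way CoreDNS must reach
// it; an i/o timeout here usually means the CNI isn't routing yet.
func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver service IP is reachable")
}
```
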
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-164377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-164377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=default-k8s-diff-port-164377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_29_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:29:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-164377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:30:59 +0000   Mon, 24 Nov 2025 09:29:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-164377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                89405ce7-5c63-4de3-9dc9-d223bdf4644b
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-gn9zx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-164377                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-kwvs7                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-164377             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-164377    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-2vm2s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-164377             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vv9dn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2hcnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-164377 event: Registered Node default-k8s-diff-port-164377 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-164377 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-164377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-164377 event: Registered Node default-k8s-diff-port-164377 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [4e50e4dcdd36d70659c0b0c4aa3575f4fb33b8b6719a7f096ead37ec60f7dca7] <==
	{"level":"info","ts":"2025-11-24T09:30:19.160092Z","caller":"traceutil/trace.go:172","msg":"trace[1565720357] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"334.240596ms","start":"2025-11-24T09:30:18.825834Z","end":"2025-11-24T09:30:19.160075Z","steps":["trace[1565720357] 'agreement among raft nodes before linearized reading'  (duration: 183.592552ms)","trace[1565720357] 'range keys from in-memory index tree'  (duration: 150.381511ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:30:19.160133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.825829Z","time spent":"334.290788ms","remote":"127.0.0.1:60150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":202,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-11-24T09:30:19.160441Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.422902ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597273599602564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" mod_revision:390 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" value_size:6084 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:30:19.160559Z","caller":"traceutil/trace.go:172","msg":"trace[1114847738] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"372.686172ms","start":"2025-11-24T09:30:18.787846Z","end":"2025-11-24T09:30:19.160532Z","steps":["trace[1114847738] 'process raft request'  (duration: 221.619761ms)","trace[1114847738] 'compare'  (duration: 150.309022ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:30:19.160615Z","caller":"traceutil/trace.go:172","msg":"trace[519783473] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:460; }","duration":"151.259118ms","start":"2025-11-24T09:30:19.009347Z","end":"2025-11-24T09:30:19.160606Z","steps":["trace[519783473] 'read index received'  (duration: 27.932µs)","trace[519783473] 'applied index is now lower than readState.Index'  (duration: 151.230743ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:30:19.160626Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.787827Z","time spent":"372.761296ms","remote":"127.0.0.1:60106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6152,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" mod_revision:390 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" value_size:6084 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-164377\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.160683Z","caller":"traceutil/trace.go:172","msg":"trace[1473366631] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"338.533525ms","start":"2025-11-24T09:30:18.822144Z","end":"2025-11-24T09:30:19.160677Z","steps":["trace[1473366631] 'process raft request'  (duration: 338.382471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.160737Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.822117Z","time spent":"338.586901ms","remote":"127.0.0.1:60240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" mod_revision:439 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-164377\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.160739Z","caller":"traceutil/trace.go:172","msg":"trace[1434946300] transaction","detail":"{read_only:false; number_of_response:0; response_revision:442; }","duration":"362.587978ms","start":"2025-11-24T09:30:18.798141Z","end":"2025-11-24T09:30:19.160729Z","steps":["trace[1434946300] 'process raft request'  (duration: 362.349414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.160817Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.798117Z","time spent":"362.64817ms","remote":"127.0.0.1:60094","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-164377\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-164377\" value_size:4255 >> failure:<>"}
	{"level":"warn","ts":"2025-11-24T09:30:19.160926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"291.173708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-11-24T09:30:19.160954Z","caller":"traceutil/trace.go:172","msg":"trace[1760795215] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:444; }","duration":"291.204856ms","start":"2025-11-24T09:30:18.869741Z","end":"2025-11-24T09:30:19.160946Z","steps":["trace[1760795215] 'agreement among raft nodes before linearized reading'  (duration: 291.105426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.130378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"warn","ts":"2025-11-24T09:30:19.161100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"292.101948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-kwvs7\" limit:1 ","response":"range_response_count:1 size:5431"}
	{"level":"info","ts":"2025-11-24T09:30:19.161120Z","caller":"traceutil/trace.go:172","msg":"trace[827457512] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:444; }","duration":"234.196472ms","start":"2025-11-24T09:30:18.926915Z","end":"2025-11-24T09:30:19.161111Z","steps":["trace[827457512] 'agreement among raft nodes before linearized reading'  (duration: 234.088335ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.161153Z","caller":"traceutil/trace.go:172","msg":"trace[1357198967] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-kwvs7; range_end:; response_count:1; response_revision:444; }","duration":"292.152369ms","start":"2025-11-24T09:30:18.868981Z","end":"2025-11-24T09:30:19.161133Z","steps":["trace[1357198967] 'agreement among raft nodes before linearized reading'  (duration: 291.934913ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.271667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-24T09:30:19.161189Z","caller":"traceutil/trace.go:172","msg":"trace[2061821447] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:444; }","duration":"234.300263ms","start":"2025-11-24T09:30:18.926882Z","end":"2025-11-24T09:30:19.161182Z","steps":["trace[2061821447] 'agreement among raft nodes before linearized reading'  (duration: 234.191561ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161221Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.109366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-11-24T09:30:19.161233Z","caller":"traceutil/trace.go:172","msg":"trace[907757436] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"330.218453ms","start":"2025-11-24T09:30:18.831006Z","end":"2025-11-24T09:30:19.161224Z","steps":["trace[907757436] 'process raft request'  (duration: 329.564455ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.161246Z","caller":"traceutil/trace.go:172","msg":"trace[1598513077] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:444; }","duration":"237.138703ms","start":"2025-11-24T09:30:18.924101Z","end":"2025-11-24T09:30:19.161240Z","steps":["trace[1598513077] 'agreement among raft nodes before linearized reading'  (duration: 237.063136ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:30:19.161284Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T09:30:18.830986Z","time spent":"330.26498ms","remote":"127.0.0.1:60240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" mod_revision:435 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-f57ddjn4mhkeibnj2dianoa7ju\" > >"}
	{"level":"info","ts":"2025-11-24T09:30:19.584306Z","caller":"traceutil/trace.go:172","msg":"trace[689755430] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"124.922943ms","start":"2025-11-24T09:30:19.459364Z","end":"2025-11-24T09:30:19.584287Z","steps":["trace[689755430] 'process raft request'  (duration: 124.854602ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:30:19.584348Z","caller":"traceutil/trace.go:172","msg":"trace[80575847] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"125.649582ms","start":"2025-11-24T09:30:19.458651Z","end":"2025-11-24T09:30:19.584300Z","steps":["trace[80575847] 'process raft request'  (duration: 115.554436ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:31:11 up  1:13,  0 user,  load average: 2.60, 3.21, 2.25
	Linux default-k8s-diff-port-164377 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd5ff3cd6ed0a4412d7185ce32dcfa542107181ea6781701296539e88ec8c7f1] <==
	I1124 09:30:19.930268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:30:19.930888       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:30:19.931150       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:30:19.931172       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:30:19.931194       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:30:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:30:20.203819       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:30:20.203902       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:30:20.203915       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:30:20.204038       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 09:30:50.204544       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:30:50.204811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:30:50.204935       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:30:50.205130       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1124 09:30:51.804083       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:30:51.804118       1 metrics.go:72] Registering metrics
	I1124 09:30:51.804210       1 controller.go:711] "Syncing nftables rules"
	I1124 09:31:00.203511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:31:00.203581       1 main.go:301] handling current node
	I1124 09:31:10.203644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:31:10.203681       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b1b6ab34f1c336ff78c78e933ed7c102ff703bc94cb72f01593416f04ee9bfe] <==
	I1124 09:30:18.781904       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:30:18.782043       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:30:18.782053       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:30:18.789788       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:30:18.793397       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:30:18.793428       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 09:30:18.793456       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:30:18.793480       1 aggregator.go:171] initial CRD sync complete...
	I1124 09:30:18.793494       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:30:18.793502       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:30:18.793508       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:30:18.815087       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:30:18.821452       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:30:18.868213       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1124 09:30:19.163444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:30:19.164878       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:30:19.293736       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:30:19.458159       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:30:19.596490       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:30:19.695834       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:30:19.755082       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.205.67"}
	I1124 09:30:19.786706       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.69.182"}
	I1124 09:30:22.190533       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:30:22.443276       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:30:22.741156       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [892148c6b52c0c3461c363973c52f2b385b42471846bdb218d5762957f002d99] <==
	I1124 09:30:22.099370       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:30:22.099482       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-164377"
	I1124 09:30:22.099570       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 09:30:22.100853       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:30:22.103305       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:30:22.104796       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:30:22.136626       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:30:22.136635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:30:22.138063       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:30:22.138087       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:30:22.138196       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:30:22.138619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:30:22.140017       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:30:22.141090       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:30:22.143281       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:30:22.144501       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:30:22.145589       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:30:22.150895       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:30:22.159018       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 09:30:22.159118       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 09:30:22.159179       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 09:30:22.159189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 09:30:22.159198       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 09:30:22.160309       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:30:22.161326       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [d1d380122c4828f19d29ada5371570b902bc1915f5aa17fbda0cb5bb589a355f] <==
	I1124 09:30:19.836200       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:30:19.907979       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:30:20.008783       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:30:20.009373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:30:20.009710       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:30:20.040240       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:30:20.040306       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:30:20.047437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:30:20.048372       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:30:20.048510       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:20.050104       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:30:20.050152       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:30:20.050183       1 config.go:200] "Starting service config controller"
	I1124 09:30:20.050190       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:30:20.050225       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:30:20.050231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:30:20.050257       1 config.go:309] "Starting node config controller"
	I1124 09:30:20.050262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:30:20.151317       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:30:20.151328       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:30:20.151378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:30:20.151393       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [dc45893b62892af90ad7db772627aa0cd25269cf53e738b0c779f8f21346bba8] <==
	I1124 09:30:16.276631       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:30:18.704038       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:30:18.704076       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:30:18.704103       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:30:18.704113       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:30:18.732935       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:30:18.732968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:30:18.735976       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:18.736396       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:30:18.736486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:30:18.736592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:30:18.836784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.273410     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749448     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/26b38e76-0b44-4ea1-87db-97ff20b2a167-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2hcnx\" (UID: \"26b38e76-0b44-4ea1-87db-97ff20b2a167\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749509     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glwx7\" (UniqueName: \"kubernetes.io/projected/26b38e76-0b44-4ea1-87db-97ff20b2a167-kube-api-access-glwx7\") pod \"kubernetes-dashboard-855c9754f9-2hcnx\" (UID: \"26b38e76-0b44-4ea1-87db-97ff20b2a167\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749540     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2079b20-bcd6-49c4-97e8-93f5ee8c31d9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vv9dn\" (UID: \"a2079b20-bcd6-49c4-97e8-93f5ee8c31d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn"
	Nov 24 09:30:22 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:22.749563     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ncq\" (UniqueName: \"kubernetes.io/projected/a2079b20-bcd6-49c4-97e8-93f5ee8c31d9-kube-api-access-r6ncq\") pod \"dashboard-metrics-scraper-6ffb444bf9-vv9dn\" (UID: \"a2079b20-bcd6-49c4-97e8-93f5ee8c31d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn"
	Nov 24 09:30:25 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:25.867355     726 scope.go:117] "RemoveContainer" containerID="01d6d7efbe6af9d9e43cfba32ca7dc70d87a9d1315d618c17ba1e5fc7c8b083d"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:26.872635     726 scope.go:117] "RemoveContainer" containerID="01d6d7efbe6af9d9e43cfba32ca7dc70d87a9d1315d618c17ba1e5fc7c8b083d"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:26.872772     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:26 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:26.872964     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:27 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:27.878380     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:27 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:27.878586     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:28 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:28.893195     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2hcnx" podStartSLOduration=1.43430086 podStartE2EDuration="6.893169789s" podCreationTimestamp="2025-11-24 09:30:22 +0000 UTC" firstStartedPulling="2025-11-24 09:30:22.999073827 +0000 UTC m=+8.307348679" lastFinishedPulling="2025-11-24 09:30:28.457942755 +0000 UTC m=+13.766217608" observedRunningTime="2025-11-24 09:30:28.892874819 +0000 UTC m=+14.201149688" watchObservedRunningTime="2025-11-24 09:30:28.893169789 +0000 UTC m=+14.201444658"
	Nov 24 09:30:34 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:34.348296     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:34 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:34.348550     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.799068     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.928852     726 scope.go:117] "RemoveContainer" containerID="1f9f2b0a34c76d32d5e1ecd4dbb476a15fd4b26cb570717e5f02d2d890d335de"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:47.929092     726 scope.go:117] "RemoveContainer" containerID="f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	Nov 24 09:30:47 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:47.929352     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:30:49 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:49.937428     726 scope.go:117] "RemoveContainer" containerID="35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273"
	Nov 24 09:30:54 default-k8s-diff-port-164377 kubelet[726]: I1124 09:30:54.348625     726 scope.go:117] "RemoveContainer" containerID="f4d186acf5471d5830a3d965311af6068f0c69ebb7cd8a9a5515a4e387886672"
	Nov 24 09:30:54 default-k8s-diff-port-164377 kubelet[726]: E1124 09:30:54.348843     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vv9dn_kubernetes-dashboard(a2079b20-bcd6-49c4-97e8-93f5ee8c31d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vv9dn" podUID="a2079b20-bcd6-49c4-97e8-93f5ee8c31d9"
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:31:07 default-k8s-diff-port-164377 systemd[1]: kubelet.service: Consumed 1.670s CPU time.
	
	
	==> kubernetes-dashboard [185c940805d0b7d87ece5c74083744172e4a580f1090813538cc221bc51f08ca] <==
	2025/11/24 09:30:28 Starting overwatch
	2025/11/24 09:30:28 Using namespace: kubernetes-dashboard
	2025/11/24 09:30:28 Using in-cluster config to connect to apiserver
	2025/11/24 09:30:28 Using secret token for csrf signing
	2025/11/24 09:30:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:30:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:30:28 Successful initial request to the apiserver, version: v1.34.2
	2025/11/24 09:30:28 Generating JWE encryption key
	2025/11/24 09:30:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:30:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:30:28 Initializing JWE encryption key from synchronized object
	2025/11/24 09:30:28 Creating in-cluster Sidecar client
	2025/11/24 09:30:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:30:28 Serving insecurely on HTTP port: 9090
	2025/11/24 09:30:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2beca2bae59934b14ecc5985e12b825e9d4921e48b17d11d39039d2a78ed30b6] <==
	I1124 09:30:49.987176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:30:49.994857       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:30:49.994901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:30:49.997126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:53.452409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:30:57.712973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:01.311735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:04.365141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.387855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.392861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:31:07.393050       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:31:07.393190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c66cc1a2-9dbe-4e90-b04e-0717d7b6501e", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606 became leader
	I1124 09:31:07.393253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606!
	W1124 09:31:07.395414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:07.399198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:31:07.493761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-164377_1e66cfd6-8527-492e-9445-ae1968966606!
	W1124 09:31:09.402966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:09.407250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:11.410852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:31:11.416397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [35c10215bee00d6e5d470f828ed5ac25b6fbaadd21e5d6bb0919a63b77ec7273] <==
	I1124 09:30:19.731713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:30:49.740499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377: exit status 2 (327.954297ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.57s)
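The embed-certs trace below shows the failure mode shared by the Pause tests in this run: after the kubelet is stopped, each `sudo runc list -f json` attempt exits 1 with `open /run/runc: no such file or directory`, and minikube retries a few times (~300 ms apart, the retry.go lines) before exiting with GUEST_PAUSE. The following is a minimal Go sketch of that retry shape, using a hypothetical helper name `listRunc`; the real loop lives in minikube's pause path, and this is an illustration of the pattern visible in the logs, not the project's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc is a hypothetical stand-in for the "sudo runc list -f json"
// step seen in the trace. It retries with a short backoff, mirroring the
// retry.go lines, and returns the last error once attempts are exhausted.
func listRunc(attempts int, backoff time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		// On this node the command exits 1 with
		// "open /run/runc: no such file or directory".
		lastErr = fmt.Errorf("list running: runc: %w", err)
		time.Sleep(backoff)
	}
	return nil, lastErr
}

func main() {
	if _, err := listRunc(4, 300*time.Millisecond); err != nil {
		// The caller surfaces this as GUEST_PAUSE, as in the trace below.
		fmt.Println("GUEST_PAUSE:", err)
	}
}

With /run/runc absent every attempt fails the same way, so the last error is surfaced after the final attempt, which matches the exit status 80 that each Pause test reports.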

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1: exit status 80 (2.18189962s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-673346 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:32:31.892738  364903 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:32:31.893036  364903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:32:31.893047  364903 out.go:374] Setting ErrFile to fd 2...
	I1124 09:32:31.893051  364903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:32:31.893238  364903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:32:31.893476  364903 out.go:368] Setting JSON to false
	I1124 09:32:31.893491  364903 mustload.go:66] Loading cluster: embed-certs-673346
	I1124 09:32:31.893829  364903 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:32:31.894207  364903 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:32:31.912492  364903 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:32:31.912749  364903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:32:31.969444  364903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-24 09:32:31.959480427 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:32:31.970056  364903 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-673346 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 09:32:31.973032  364903 out.go:179] * Pausing node embed-certs-673346 ... 
	I1124 09:32:31.974596  364903 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:32:31.974883  364903 ssh_runner.go:195] Run: systemctl --version
	I1124 09:32:31.974943  364903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:32:31.993378  364903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:32:32.093993  364903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:32:32.106217  364903 pause.go:52] kubelet running: true
	I1124 09:32:32.106273  364903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:32:32.259129  364903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:32:32.259222  364903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:32:32.326697  364903 cri.go:89] found id: "ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021"
	I1124 09:32:32.326720  364903 cri.go:89] found id: "6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7"
	I1124 09:32:32.326725  364903 cri.go:89] found id: "78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32"
	I1124 09:32:32.326731  364903 cri.go:89] found id: "88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27"
	I1124 09:32:32.326735  364903 cri.go:89] found id: "bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	I1124 09:32:32.326740  364903 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:32:32.326743  364903 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:32:32.326748  364903 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:32:32.326752  364903 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:32:32.326769  364903 cri.go:89] found id: "a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	I1124 09:32:32.326778  364903 cri.go:89] found id: "c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167"
	I1124 09:32:32.326783  364903 cri.go:89] found id: ""
	I1124 09:32:32.326832  364903 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:32:32.339380  364903 retry.go:31] will retry after 361.934942ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:32:32.702050  364903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:32:32.715368  364903 pause.go:52] kubelet running: false
	I1124 09:32:32.715435  364903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:32:32.851562  364903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:32:32.851637  364903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:32:32.918460  364903 cri.go:89] found id: "ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021"
	I1124 09:32:32.918486  364903 cri.go:89] found id: "6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7"
	I1124 09:32:32.918491  364903 cri.go:89] found id: "78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32"
	I1124 09:32:32.918495  364903 cri.go:89] found id: "88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27"
	I1124 09:32:32.918498  364903 cri.go:89] found id: "bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	I1124 09:32:32.918501  364903 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:32:32.918504  364903 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:32:32.918507  364903 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:32:32.918509  364903 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:32:32.918515  364903 cri.go:89] found id: "a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	I1124 09:32:32.918518  364903 cri.go:89] found id: "c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167"
	I1124 09:32:32.918520  364903 cri.go:89] found id: ""
	I1124 09:32:32.918565  364903 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:32:32.930561  364903 retry.go:31] will retry after 306.649595ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:32:33.238157  364903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:32:33.251259  364903 pause.go:52] kubelet running: false
	I1124 09:32:33.251322  364903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:32:33.389273  364903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:32:33.389377  364903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:32:33.455449  364903 cri.go:89] found id: "ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021"
	I1124 09:32:33.455469  364903 cri.go:89] found id: "6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7"
	I1124 09:32:33.455473  364903 cri.go:89] found id: "78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32"
	I1124 09:32:33.455479  364903 cri.go:89] found id: "88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27"
	I1124 09:32:33.455483  364903 cri.go:89] found id: "bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	I1124 09:32:33.455488  364903 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:32:33.455493  364903 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:32:33.455497  364903 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:32:33.455501  364903 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:32:33.455510  364903 cri.go:89] found id: "a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	I1124 09:32:33.455518  364903 cri.go:89] found id: "c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167"
	I1124 09:32:33.455523  364903 cri.go:89] found id: ""
	I1124 09:32:33.455568  364903 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:32:33.467156  364903 retry.go:31] will retry after 311.223409ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:32:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:32:33.779572  364903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:32:33.792488  364903 pause.go:52] kubelet running: false
	I1124 09:32:33.792547  364903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 09:32:33.928273  364903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 09:32:33.928353  364903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 09:32:33.993550  364903 cri.go:89] found id: "ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021"
	I1124 09:32:33.993569  364903 cri.go:89] found id: "6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7"
	I1124 09:32:33.993573  364903 cri.go:89] found id: "78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32"
	I1124 09:32:33.993576  364903 cri.go:89] found id: "88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27"
	I1124 09:32:33.993579  364903 cri.go:89] found id: "bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	I1124 09:32:33.993582  364903 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:32:33.993585  364903 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:32:33.993587  364903 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:32:33.993590  364903 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:32:33.993595  364903 cri.go:89] found id: "a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	I1124 09:32:33.993598  364903 cri.go:89] found id: "c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167"
	I1124 09:32:33.993601  364903 cri.go:89] found id: ""
	I1124 09:32:33.993642  364903 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:32:34.007973  364903 out.go:203] 
	W1124 09:32:34.009420  364903 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:32:34.009442  364903 out.go:285] * 
	W1124 09:32:34.013524  364903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:32:34.014916  364903 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1 failed: exit status 80
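Both retries above hit the same missing /run/runc, so minikube gave up and exited with status 80, which it appears to reserve for guest-side errors (the GUEST_PAUSE reason shown in the output). A sketch for reproducing outside the harness and keeping the log file the error box points to (binary and paths from this run):

    # Re-run the failing command, then collect the referenced log.
    out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1
    cat /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log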
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-673346
helpers_test.go:243: (dbg) docker inspect embed-certs-673346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	        "Created": "2025-11-24T09:30:19.733597004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 362246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:31:27.077416072Z",
	            "FinishedAt": "2025-11-24T09:31:26.210769426Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hosts",
	        "LogPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794-json.log",
	        "Name": "/embed-certs-673346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-673346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-673346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	                "LowerDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-673346",
	                "Source": "/var/lib/docker/volumes/embed-certs-673346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-673346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-673346",
	                "name.minikube.sigs.k8s.io": "embed-certs-673346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ff4631d21e5350bee232253355a5aeb4019306e481a7219f940c057d2d34711",
	            "SandboxKey": "/var/run/docker/netns/0ff4631d21e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-673346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97d1cb035a36ae5fa5c959009087829b88d672ef46bb4e02a32ec47d72e472d5",
	                    "EndpointID": "ad144b8152ab3f306ad9fdf38dfa825fc793cdd959453c9413862f8a08cd8a56",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:fa:49:8f:59:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-673346",
	                        "1bda3483b0ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
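The inspect output shows the Docker layer itself is healthy: State.Status is "running", the container restarted cleanly at 09:31:27, and all five guest ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1. The failure is therefore inside the guest, not in the host container runtime. The same fields can be pulled directly with Go templates (a sketch; docker inspect -f is the standard flag):

    # Extract just the fields this post-mortem relies on.
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' embed-certs-673346
    docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-673346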
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346
E1124 09:32:34.103519    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346: exit status 2 (322.319687ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25: (1.073788869s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                          │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                               │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                               │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                     │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                     │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p newest-cni-639420                                                                                                                                                     │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                     │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ default-k8s-diff-port-164377 image list --format=json                                                                                                                    │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-164377 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-673346 --alsologtostderr -v=3                                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ delete  │ -p default-k8s-diff-port-164377                                                                                                                                          │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ delete  │ -p default-k8s-diff-port-164377                                                                                                                                          │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-673346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:32 UTC │
	│ image   │ embed-certs-673346 image list --format=json                                                                                                                              │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:32 UTC │ 24 Nov 25 09:32 UTC │
	│ pause   │ -p embed-certs-673346 --alsologtostderr -v=1                                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
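Note that every pause invocation in the audit table (old-k8s-version-767267, newest-cni-639420, no-preload-938348, default-k8s-diff-port-164377, embed-certs-673346) has an empty END TIME: none of them completed, which points at a node-image-wide /run/runc problem rather than anything specific to this profile. The same view could be pulled from minikube's audit log (a sketch; the audit.json path is inferred from the MINIKUBE_HOME shown in this run, and the field names are assumptions):

    # List pause invocations from the audit log (field names assumed).
    jq -r 'select(.data.command=="pause") | [.data.startTime, .data.profile] | @tsv' \
      /home/jenkins/minikube-integration/21978-5690/.minikube/logs/audit.json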
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:31:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:31:26.848184  362032 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:31:26.848478  362032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:26.848488  362032 out.go:374] Setting ErrFile to fd 2...
	I1124 09:31:26.848495  362032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:26.848717  362032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:31:26.849183  362032 out.go:368] Setting JSON to false
	I1124 09:31:26.850169  362032 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4433,"bootTime":1763972254,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:31:26.850231  362032 start.go:143] virtualization: kvm guest
	I1124 09:31:26.852143  362032 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:31:26.853632  362032 notify.go:221] Checking for updates...
	I1124 09:31:26.853646  362032 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:31:26.855028  362032 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:31:26.856327  362032 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:26.857613  362032 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:31:26.858761  362032 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:31:26.859973  362032 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:31:26.861514  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:26.862135  362032 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:31:26.885601  362032 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:31:26.885717  362032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:31:26.943191  362032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 09:31:26.933368732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:31:26.943396  362032 docker.go:319] overlay module found
	I1124 09:31:26.945145  362032 out.go:179] * Using the docker driver based on existing profile
	I1124 09:31:26.946390  362032 start.go:309] selected driver: docker
	I1124 09:31:26.946408  362032 start.go:927] validating driver "docker" against &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:26.946540  362032 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:31:26.947165  362032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:31:27.006065  362032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 09:31:26.996811975 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:31:27.006352  362032 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:31:27.006399  362032 cni.go:84] Creating CNI manager for ""
	I1124 09:31:27.006457  362032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:31:27.006504  362032 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:27.008380  362032 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:31:27.009440  362032 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:31:27.010536  362032 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:31:27.011560  362032 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:31:27.011604  362032 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:31:27.011619  362032 cache.go:65] Caching tarball of preloaded images
	I1124 09:31:27.011641  362032 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:31:27.011716  362032 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:31:27.011734  362032 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:31:27.011832  362032 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:31:27.032890  362032 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:31:27.032907  362032 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:31:27.032925  362032 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:31:27.032962  362032 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:31:27.033013  362032 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:31:27.033029  362032 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:31:27.033040  362032 fix.go:54] fixHost starting: 
	I1124 09:31:27.033289  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:27.050584  362032 fix.go:112] recreateIfNeeded on embed-certs-673346: state=Stopped err=<nil>
	W1124 09:31:27.050631  362032 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:31:27.052314  362032 out.go:252] * Restarting existing docker container for "embed-certs-673346" ...
	I1124 09:31:27.052403  362032 cli_runner.go:164] Run: docker start embed-certs-673346
	I1124 09:31:27.320493  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:27.340082  362032 kic.go:430] container "embed-certs-673346" state is running.
	I1124 09:31:27.340507  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:27.358821  362032 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:31:27.359083  362032 machine.go:94] provisionDockerMachine start ...
	I1124 09:31:27.359160  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:27.377380  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:27.377636  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:27.377647  362032 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:31:27.378287  362032 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47210->127.0.0.1:33138: read: connection reset by peer
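The first dial is reset because the container was started at 09:31:27 and sshd is not yet accepting connections; libmachine retries the hostname probe until it succeeds about three seconds later (next line). The equivalent manual probe, using the published port and key path shown in this log:

    # Same probe libmachine performs, by hand (port and key path from this run).
    ssh -o StrictHostKeyChecking=no -p 33138 \
      -i /home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa \
      docker@127.0.0.1 hostname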
	I1124 09:31:30.522612  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:31:30.522650  362032 ubuntu.go:182] provisioning hostname "embed-certs-673346"
	I1124 09:31:30.522713  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.540979  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:30.541207  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:30.541221  362032 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-673346 && echo "embed-certs-673346" | sudo tee /etc/hostname
	I1124 09:31:30.693556  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:31:30.693638  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.711160  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:30.711450  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:30.711475  362032 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-673346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-673346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-673346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:31:30.853911  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:31:30.853949  362032 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:31:30.853990  362032 ubuntu.go:190] setting up certificates
	I1124 09:31:30.854011  362032 provision.go:84] configureAuth start
	I1124 09:31:30.854093  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:30.871999  362032 provision.go:143] copyHostCerts
	I1124 09:31:30.872069  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:31:30.872104  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:31:30.872190  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:31:30.872320  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:31:30.872345  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:31:30.872392  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:31:30.872497  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:31:30.872507  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:31:30.872547  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:31:30.872636  362032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-673346 san=[127.0.0.1 192.168.76.2 embed-certs-673346 localhost minikube]
	I1124 09:31:30.970588  362032 provision.go:177] copyRemoteCerts
	I1124 09:31:30.970662  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:31:30.970712  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.988687  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.089551  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 09:31:31.106859  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:31:31.124101  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:31:31.141157  362032 provision.go:87] duration metric: took 287.129503ms to configureAuth
	I1124 09:31:31.141188  362032 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:31:31.141376  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:31.141480  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.159126  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:31.159363  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:31.159381  362032 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:31:31.481439  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:31:31.481464  362032 machine.go:97] duration metric: took 4.122363333s to provisionDockerMachine
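Provisioning finishes by writing a sysconfig drop-in so cri-o treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarting crio. The result can be verified in-guest (a sketch, assuming minikube ssh as the entry point):

    # Confirm the option file the provisioner just wrote.
    minikube ssh -p embed-certs-673346 'cat /etc/sysconfig/crio.minikube'
    # Expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '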
	I1124 09:31:31.481478  362032 start.go:293] postStartSetup for "embed-certs-673346" (driver="docker")
	I1124 09:31:31.481488  362032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:31:31.481538  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:31:31.481575  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.500967  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.603046  362032 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:31:31.606547  362032 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:31:31.606570  362032 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:31:31.606583  362032 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:31:31.606635  362032 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:31:31.606723  362032 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:31:31.606847  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:31:31.614217  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:31:31.631463  362032 start.go:296] duration metric: took 149.973912ms for postStartSetup
	I1124 09:31:31.631552  362032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:31:31.631597  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.649744  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.748550  362032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:31:31.753099  362032 fix.go:56] duration metric: took 4.720052564s for fixHost
	I1124 09:31:31.753128  362032 start.go:83] releasing machines lock for "embed-certs-673346", held for 4.720104047s
	I1124 09:31:31.753195  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:31.770785  362032 ssh_runner.go:195] Run: cat /version.json
	I1124 09:31:31.770829  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.770867  362032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:31:31.770938  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.791759  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.792013  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.888429  362032 ssh_runner.go:195] Run: systemctl --version
	I1124 09:31:31.939417  362032 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:31:31.973129  362032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:31:31.977773  362032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:31:31.977832  362032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:31:31.986229  362032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:31:31.986256  362032 start.go:496] detecting cgroup driver to use...
	I1124 09:31:31.986298  362032 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:31:31.986357  362032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:31:31.999703  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:31:32.011653  362032 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:31:32.011697  362032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:31:32.025378  362032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:31:32.037823  362032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:31:32.113069  362032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:31:32.187822  362032 docker.go:234] disabling docker service ...
	I1124 09:31:32.187879  362032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:31:32.201614  362032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:31:32.213571  362032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:31:32.290005  362032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:31:32.366752  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:31:32.378941  362032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:31:32.392744  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:32.537638  362032 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:31:32.537697  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.547505  362032 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:31:32.547566  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.556327  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.564905  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.573349  362032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:31:32.580970  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.589411  362032 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.597243  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
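	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (a sketch reconstructed from the commands; other keys in the file are untouched):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]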
	I1124 09:31:32.605451  362032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:31:32.612572  362032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:31:32.620108  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:32.696934  362032 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:31:32.830887  362032 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:31:32.830950  362032 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:31:32.834872  362032 start.go:564] Will wait 60s for crictl version
	I1124 09:31:32.834928  362032 ssh_runner.go:195] Run: which crictl
	I1124 09:31:32.838317  362032 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:31:32.862746  362032 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:31:32.862814  362032 ssh_runner.go:195] Run: crio --version
	I1124 09:31:32.889304  362032 ssh_runner.go:195] Run: crio --version
	I1124 09:31:32.917997  362032 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:31:32.919175  362032 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:31:32.936616  362032 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:31:32.940665  362032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:31:32.950588  362032 kubeadm.go:884] updating cluster {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:31:32.950869  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.105808  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.258045  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.405656  362032 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:31:33.405831  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.560389  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.711549  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.867141  362032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:31:33.899459  362032 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:31:33.899486  362032 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:31:33.899537  362032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:31:33.923591  362032 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:31:33.923620  362032 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:31:33.923629  362032 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 09:31:33.923756  362032 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-673346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
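	Note: the [Unit]/[Service] fragment above is the drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down; one way to confirm it landed on the node (sketch):
	    sudo systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in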
	I1124 09:31:33.923842  362032 ssh_runner.go:195] Run: crio config
	I1124 09:31:33.968406  362032 cni.go:84] Creating CNI manager for ""
	I1124 09:31:33.968432  362032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:31:33.968446  362032 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:31:33.968469  362032 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-673346 NodeName:embed-certs-673346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:31:33.968588  362032 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-673346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
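	Note: in the KubeProxyConfiguration above, the 0s conntrack timeouts mean kube-proxy leaves the corresponding kernel defaults alone, as the inline comments say; the effective values can be read on the node (sketch, using the sysctls named in those comments):
	    sysctl net.netfilter.nf_conntrack_tcp_timeout_established net.netfilter.nf_conntrack_tcp_timeout_close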
	I1124 09:31:33.968649  362032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:31:33.977100  362032 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:31:33.977170  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:31:33.984786  362032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 09:31:33.997457  362032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:31:34.009658  362032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 09:31:34.021901  362032 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:31:34.025577  362032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:31:34.035327  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:34.113407  362032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:31:34.134774  362032 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346 for IP: 192.168.76.2
	I1124 09:31:34.134807  362032 certs.go:195] generating shared ca certs ...
	I1124 09:31:34.134828  362032 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.135018  362032 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:31:34.135098  362032 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:31:34.135113  362032 certs.go:257] generating profile certs ...
	I1124 09:31:34.135242  362032 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key
	I1124 09:31:34.135324  362032 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844
	I1124 09:31:34.135395  362032 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key
	I1124 09:31:34.135552  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:31:34.135596  362032 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:31:34.135607  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:31:34.135649  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:31:34.135683  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:31:34.135714  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:31:34.135774  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:31:34.136540  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:31:34.154721  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:31:34.173065  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:31:34.192259  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:31:34.216905  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:31:34.235326  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:31:34.252866  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:31:34.269818  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:31:34.286771  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:31:34.303819  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:31:34.321226  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:31:34.339013  362032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:31:34.350853  362032 ssh_runner.go:195] Run: openssl version
	I1124 09:31:34.356609  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:31:34.364761  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.368658  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.368700  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.402991  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:31:34.411570  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:31:34.420148  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.423823  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.423870  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.457560  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:31:34.465473  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:31:34.473738  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.477395  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.477450  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.511497  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
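	Note: the symlink names above are OpenSSL subject-hash values; the hash for a given CA can be reproduced by hand (sketch, using the minikubeCA file from the log):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above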
	I1124 09:31:34.520078  362032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:31:34.523879  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:31:34.558318  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:31:34.592219  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:31:34.626535  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:31:34.673149  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:31:34.718656  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:31:34.773194  362032 kubeadm.go:401] StartCluster: {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:34.773306  362032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:31:34.773396  362032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:31:34.809346  362032 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:31:34.809372  362032 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:31:34.809379  362032 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:31:34.809384  362032 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:31:34.809388  362032 cri.go:89] found id: ""
	I1124 09:31:34.809436  362032 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:31:34.822307  362032 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:34Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:31:34.822444  362032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:31:34.830643  362032 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:31:34.830667  362032 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:31:34.830713  362032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:31:34.838128  362032 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:31:34.838571  362032 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-673346" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:34.838682  362032 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-673346" cluster setting kubeconfig missing "embed-certs-673346" context setting]
	I1124 09:31:34.838938  362032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.840191  362032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:31:34.847810  362032 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 09:31:34.847850  362032 kubeadm.go:602] duration metric: took 17.177062ms to restartPrimaryControlPlane
	I1124 09:31:34.847860  362032 kubeadm.go:403] duration metric: took 74.67789ms to StartCluster
	I1124 09:31:34.847881  362032 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.847948  362032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:34.848787  362032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.849011  362032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:31:34.849086  362032 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:31:34.849180  362032 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-673346"
	I1124 09:31:34.849205  362032 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-673346"
	I1124 09:31:34.849205  362032 addons.go:70] Setting dashboard=true in profile "embed-certs-673346"
	I1124 09:31:34.849221  362032 addons.go:70] Setting default-storageclass=true in profile "embed-certs-673346"
	I1124 09:31:34.849229  362032 addons.go:239] Setting addon dashboard=true in "embed-certs-673346"
	W1124 09:31:34.849238  362032 addons.go:248] addon dashboard should already be in state true
	I1124 09:31:34.849241  362032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-673346"
	W1124 09:31:34.849214  362032 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:31:34.849256  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:34.849267  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.849299  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.849584  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.849742  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.849788  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.852027  362032 out.go:179] * Verifying Kubernetes components...
	I1124 09:31:34.853083  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:34.874425  362032 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:31:34.875566  362032 addons.go:239] Setting addon default-storageclass=true in "embed-certs-673346"
	W1124 09:31:34.875585  362032 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:31:34.875610  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.875671  362032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:31:34.876065  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.876750  362032 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:31:34.876859  362032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:31:34.876873  362032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:31:34.876918  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.877848  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:31:34.877870  362032 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:31:34.877917  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.909023  362032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:31:34.909045  362032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:31:34.909190  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.911241  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.915016  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.935783  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.995449  362032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:31:35.008305  362032 node_ready.go:35] waiting up to 6m0s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:31:35.028770  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:31:35.029702  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:31:35.029721  362032 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:31:35.043437  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:31:35.043456  362032 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:31:35.054923  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:31:35.058079  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:31:35.058102  362032 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:31:35.071956  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:31:35.071987  362032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:31:35.088622  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:31:35.088649  362032 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:31:35.102267  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:31:35.102322  362032 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:31:35.114797  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:31:35.114818  362032 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:31:35.128174  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:31:35.128201  362032 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:31:35.140805  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:31:35.140827  362032 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:31:35.153364  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:31:36.694819  362032 node_ready.go:49] node "embed-certs-673346" is "Ready"
	I1124 09:31:36.694857  362032 node_ready.go:38] duration metric: took 1.686439921s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:31:36.694875  362032 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:31:36.694933  362032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:31:37.224682  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.195875666s)
	I1124 09:31:37.224742  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.169789571s)
	I1124 09:31:37.224864  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.071453546s)
	I1124 09:31:37.224887  362032 api_server.go:72] duration metric: took 2.375849138s to wait for apiserver process to appear ...
	I1124 09:31:37.224899  362032 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:31:37.224928  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:37.232429  362032 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-673346 addons enable metrics-server
	
	I1124 09:31:37.233488  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:31:37.233512  362032 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
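	Note: the [+]/[-] breakdown above is the apiserver's verbose healthz output; the same view can be fetched by hand against the endpoint from the log (sketch):
	    curl -k https://192.168.76.2:8443/healthz?verbose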
	I1124 09:31:37.255325  362032 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:31:37.259784  362032 addons.go:530] duration metric: took 2.410697418s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:31:37.725175  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:37.729559  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:31:37.729588  362032 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:31:38.225099  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:38.229271  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:31:38.230234  362032 api_server.go:141] control plane version: v1.34.2
	I1124 09:31:38.230254  362032 api_server.go:131] duration metric: took 1.005349195s to wait for apiserver health ...
	I1124 09:31:38.230271  362032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:31:38.233863  362032 system_pods.go:59] 8 kube-system pods found
	I1124 09:31:38.233901  362032 system_pods.go:61] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:31:38.233910  362032 system_pods.go:61] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:31:38.233923  362032 system_pods.go:61] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:31:38.233933  362032 system_pods.go:61] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:31:38.233938  362032 system_pods.go:61] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:31:38.233946  362032 system_pods.go:61] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:31:38.233954  362032 system_pods.go:61] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:31:38.233961  362032 system_pods.go:61] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:31:38.233971  362032 system_pods.go:74] duration metric: took 3.692305ms to wait for pod list to return data ...
	I1124 09:31:38.233981  362032 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:31:38.236421  362032 default_sa.go:45] found service account: "default"
	I1124 09:31:38.236438  362032 default_sa.go:55] duration metric: took 2.450768ms for default service account to be created ...
	I1124 09:31:38.236446  362032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:31:38.238821  362032 system_pods.go:86] 8 kube-system pods found
	I1124 09:31:38.238845  362032 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:31:38.238854  362032 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:31:38.238863  362032 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:31:38.238876  362032 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:31:38.238883  362032 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:31:38.238932  362032 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:31:38.238940  362032 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:31:38.238957  362032 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:31:38.238970  362032 system_pods.go:126] duration metric: took 2.518483ms to wait for k8s-apps to be running ...
	I1124 09:31:38.238983  362032 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:31:38.239038  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:31:38.264692  362032 system_svc.go:56] duration metric: took 25.701026ms WaitForService to wait for kubelet
	I1124 09:31:38.264717  362032 kubeadm.go:587] duration metric: took 3.41568124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:31:38.264734  362032 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:31:38.268707  362032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:31:38.268736  362032 node_conditions.go:123] node cpu capacity is 8
	I1124 09:31:38.268756  362032 node_conditions.go:105] duration metric: took 4.015342ms to run NodePressure ...
	I1124 09:31:38.268772  362032 start.go:242] waiting for startup goroutines ...
	I1124 09:31:38.268781  362032 start.go:247] waiting for cluster config update ...
	I1124 09:31:38.268795  362032 start.go:256] writing updated cluster config ...
	I1124 09:31:38.269094  362032 ssh_runner.go:195] Run: rm -f paused
	I1124 09:31:38.273880  362032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:31:38.277967  362032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:31:40.282794  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:42.284577  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:44.784144  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:47.283815  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:49.784049  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:52.282886  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:54.283843  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:56.784159  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:58.784300  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:01.284017  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:03.783463  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:06.283707  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:08.283939  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:10.785154  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:13.283485  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:15.284468  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:17.784236  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	I1124 09:32:18.282734  362032 pod_ready.go:94] pod "coredns-66bc5c9577-vgl62" is "Ready"
	I1124 09:32:18.282761  362032 pod_ready.go:86] duration metric: took 40.004767039s for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.285180  362032 pod_ready.go:83] waiting for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.288776  362032 pod_ready.go:94] pod "etcd-embed-certs-673346" is "Ready"
	I1124 09:32:18.288798  362032 pod_ready.go:86] duration metric: took 3.591809ms for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.290657  362032 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.294594  362032 pod_ready.go:94] pod "kube-apiserver-embed-certs-673346" is "Ready"
	I1124 09:32:18.294619  362032 pod_ready.go:86] duration metric: took 3.941216ms for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.296507  362032 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.481375  362032 pod_ready.go:94] pod "kube-controller-manager-embed-certs-673346" is "Ready"
	I1124 09:32:18.481399  362032 pod_ready.go:86] duration metric: took 184.872787ms for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.681068  362032 pod_ready.go:83] waiting for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.081542  362032 pod_ready.go:94] pod "kube-proxy-m54gs" is "Ready"
	I1124 09:32:19.081575  362032 pod_ready.go:86] duration metric: took 400.480986ms for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.281722  362032 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.682036  362032 pod_ready.go:94] pod "kube-scheduler-embed-certs-673346" is "Ready"
	I1124 09:32:19.682062  362032 pod_ready.go:86] duration metric: took 400.314027ms for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.682075  362032 pod_ready.go:40] duration metric: took 41.408164915s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:32:19.728447  362032 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:32:19.730232  362032 out.go:179] * Done! kubectl is now configured to use "embed-certs-673346" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.07159028Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.074967253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.07498804Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.224029666Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=74ae10fc-1be4-49a5-9b9d-992c6eb4774b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.226563078Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=646e9ba4-63a5-4110-a647-a76e805cfc77 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.229227438Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=0e544738-b013-4c41-a075-ad22737735d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.229381021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.235912498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.236424353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.267583473Z" level=info msg="Created container a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=0e544738-b013-4c41-a075-ad22737735d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.26823357Z" level=info msg="Starting container: a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707" id=f4ade467-b81b-46ef-a816-a3038503e64c name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.270401242Z" level=info msg="Started container" PID=1783 containerID=a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper id=f4ade467-b81b-46ef-a816-a3038503e64c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6c2a8ea045b8493ea799331a7bb1bba937b8fab2811108ea2863497c2f114be
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.321472142Z" level=info msg="Removing container: f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1" id=e54df80f-8660-43fb-99a8-bb710e7dae66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.331477734Z" level=info msg="Removed container f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=e54df80f-8660-43fb-99a8-bb710e7dae66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.341366548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7c1dad13-aaa8-4fd3-adb0-a1a79351687b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.342375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a88fd756-33b6-45e3-bf9a-044f02421117 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.343509706Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=868e455b-bc5c-4c02-ac46-4e773429baeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.34364432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348136531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.34835627Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8bb15c74a471d34c8ffe0e0f39239c964174bfe5d8dcd0d615ccc24edcbccf7e/merged/etc/passwd: no such file or directory"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348397275Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8bb15c74a471d34c8ffe0e0f39239c964174bfe5d8dcd0d615ccc24edcbccf7e/merged/etc/group: no such file or directory"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348756607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.378015121Z" level=info msg="Created container ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021: kube-system/storage-provisioner/storage-provisioner" id=868e455b-bc5c-4c02-ac46-4e773429baeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.378678149Z" level=info msg="Starting container: ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021" id=deca52cf-61da-4965-9dff-c32eefdc5f6a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.380403288Z" level=info msg="Started container" PID=1797 containerID=ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021 description=kube-system/storage-provisioner/storage-provisioner id=deca52cf-61da-4965-9dff-c32eefdc5f6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8916e67a1b84b24b26e9be8d8d5353bb92cad0e993a2338a55d342b2a56ad7b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ed427bb796914       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   8916e67a1b84b       storage-provisioner                          kube-system
	a6b9c373cf698       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   d6c2a8ea045b8       dashboard-metrics-scraper-6ffb444bf9-gwkxw   kubernetes-dashboard
	c4f9f490192e6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   65b245f5ad47f       kubernetes-dashboard-855c9754f9-sndp5        kubernetes-dashboard
	6083f7f446b89       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   cc3b9c097a9ea       coredns-66bc5c9577-vgl62                     kube-system
	a46f7a30227ec       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   6bbdfbee702f7       busybox                                      default
	78d705be98a49       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   d536d787e1c1a       kindnet-zm85n                                kube-system
	88ef490a4f364       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           57 seconds ago       Running             kube-proxy                  0                   52f90f4b66945       kube-proxy-m54gs                             kube-system
	bb23121fe4c9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   8916e67a1b84b       storage-provisioner                          kube-system
	67fcf06dcde21       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   ec02092c8b542       kube-scheduler-embed-certs-673346            kube-system
	fb5135f68391c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   699d288c8262c       kube-apiserver-embed-certs-673346            kube-system
	e1b77741f09d0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   385b6156a1cb2       etcd-embed-certs-673346                      kube-system
	242bac3e334fe       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   bc8205ebdeb89       kube-controller-manager-embed-certs-673346   kube-system
	
	
	==> coredns [6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43560 - 30523 "HINFO IN 6920110998200931111.2568942224873600382. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036801363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-673346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-673346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=embed-certs-673346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:30:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-673346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:32:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-673346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d639e906-b423-4ee2-aa7b-1de85e945d2c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-vgl62                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-embed-certs-673346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-zm85n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-embed-certs-673346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-embed-certs-673346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-m54gs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-embed-certs-673346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gwkxw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sndp5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                 node-controller  Node embed-certs-673346 event: Registered Node embed-certs-673346 in Controller
	  Normal  NodeReady                101s                 kubelet          Node embed-certs-673346 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node embed-certs-673346 event: Registered Node embed-certs-673346 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10] <==
	{"level":"warn","ts":"2025-11-24T09:31:36.075538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.085363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.093004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.100328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.107848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.119505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.126919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.140854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.150436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.164177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.170475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.176753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.182794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.189564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.196796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.204158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.210433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.217970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.235772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.239168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.245651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.252060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.299987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46286","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:31:45.416917Z","caller":"traceutil/trace.go:172","msg":"trace[83210444] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"128.822645ms","start":"2025-11-24T09:31:45.288068Z","end":"2025-11-24T09:31:45.416891Z","steps":["trace[83210444] 'process raft request'  (duration: 109.747005ms)","trace[83210444] 'compare'  (duration: 18.952172ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:31:45.622182Z","caller":"traceutil/trace.go:172","msg":"trace[1037500426] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"108.141371ms","start":"2025-11-24T09:31:45.514017Z","end":"2025-11-24T09:31:45.622158Z","steps":["trace[1037500426] 'process raft request'  (duration: 87.931453ms)","trace[1037500426] 'compare'  (duration: 20.110816ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:32:35 up  1:15,  0 user,  load average: 0.94, 2.55, 2.10
	Linux embed-certs-673346 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32] <==
	I1124 09:31:37.756773       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:31:37.757040       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:31:37.757234       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:31:37.757256       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:31:37.757284       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:31:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:31:37.983622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:31:37.983654       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:31:37.983680       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:31:37.983811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:31:38.483949       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:31:38.483988       1 metrics.go:72] Registering metrics
	I1124 09:31:38.484082       1 controller.go:711] "Syncing nftables rules"
	I1124 09:31:48.052127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:31:48.052225       1 main.go:301] handling current node
	I1124 09:31:58.051596       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:31:58.051638       1 main.go:301] handling current node
	I1124 09:32:08.052017       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:08.052050       1 main.go:301] handling current node
	I1124 09:32:18.051488       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:18.051526       1 main.go:301] handling current node
	I1124 09:32:28.051705       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:28.051742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee] <==
	I1124 09:31:36.775544       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:31:36.775551       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:31:36.775685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:31:36.775940       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 09:31:36.776807       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 09:31:36.776928       1 policy_source.go:240] refreshing policies
	I1124 09:31:36.778208       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:31:36.778261       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:31:36.778382       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:31:36.778445       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:31:36.778604       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:31:36.778854       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:31:36.782707       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:31:36.813393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:31:37.047972       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:31:37.073663       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:31:37.091669       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:31:37.100623       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:31:37.106742       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:31:37.136730       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.247.146"}
	I1124 09:31:37.145679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.13.54"}
	I1124 09:31:37.679285       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:31:40.217532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:31:40.667281       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:31:40.816775       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9] <==
	I1124 09:31:40.181276       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:31:40.213765       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:31:40.213796       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:31:40.213825       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:31:40.213856       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 09:31:40.213941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:31:40.213959       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:31:40.213970       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:31:40.214207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:31:40.214311       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:31:40.214311       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:31:40.214987       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:31:40.219003       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:31:40.219057       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:31:40.220814       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:31:40.223004       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 09:31:40.223139       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:31:40.223231       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-673346"
	I1124 09:31:40.223278       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 09:31:40.224371       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:31:40.226591       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:31:40.228837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:31:40.230030       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:31:40.232375       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:31:40.236747       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27] <==
	I1124 09:31:37.616959       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:31:37.672015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:31:37.772604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:31:37.772635       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 09:31:37.772737       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:31:37.793586       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:31:37.793666       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:31:37.799489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:31:37.799841       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:31:37.799878       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:31:37.803616       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:31:37.803638       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:31:37.803663       1 config.go:309] "Starting node config controller"
	I1124 09:31:37.803672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:31:37.803685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:31:37.803707       1 config.go:200] "Starting service config controller"
	I1124 09:31:37.803715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:31:37.803770       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:31:37.803787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:31:37.903719       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:31:37.903782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:31:37.903889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937] <==
	I1124 09:31:35.285695       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:31:36.695726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:31:36.695763       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:31:36.695776       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:31:36.695786       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:31:36.733590       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:31:36.733624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:31:36.736511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:31:36.736579       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:31:36.736898       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:31:36.737017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:31:36.837189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849321     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvsz\" (UniqueName: \"kubernetes.io/projected/64591962-d4dc-4736-bf59-225893e09447-kube-api-access-7nvsz\") pod \"kubernetes-dashboard-855c9754f9-sndp5\" (UID: \"64591962-d4dc-4736-bf59-225893e09447\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sndp5"
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849365     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e96ecf35-4937-4756-b450-f6c47f80fea3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gwkxw\" (UID: \"e96ecf35-4937-4756-b450-f6c47f80fea3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw"
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849447     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxd2w\" (UniqueName: \"kubernetes.io/projected/e96ecf35-4937-4756-b450-f6c47f80fea3-kube-api-access-rxd2w\") pod \"dashboard-metrics-scraper-6ffb444bf9-gwkxw\" (UID: \"e96ecf35-4937-4756-b450-f6c47f80fea3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw"
	Nov 24 09:31:43 embed-certs-673346 kubelet[733]: I1124 09:31:43.269879     733 scope.go:117] "RemoveContainer" containerID="108f881ac432719fbca8dee367316360d21e77d8aa710164c46717dc916ebbf1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: I1124 09:31:44.274677     733 scope.go:117] "RemoveContainer" containerID="108f881ac432719fbca8dee367316360d21e77d8aa710164c46717dc916ebbf1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: I1124 09:31:44.274852     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: E1124 09:31:44.275084     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:31:45 embed-certs-673346 kubelet[733]: I1124 09:31:45.279390     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:45 embed-certs-673346 kubelet[733]: E1124 09:31:45.279641     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:31:46 embed-certs-673346 kubelet[733]: I1124 09:31:46.292185     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sndp5" podStartSLOduration=1.701885267 podStartE2EDuration="6.292162651s" podCreationTimestamp="2025-11-24 09:31:40 +0000 UTC" firstStartedPulling="2025-11-24 09:31:41.10984002 +0000 UTC m=+6.972994511" lastFinishedPulling="2025-11-24 09:31:45.700117401 +0000 UTC m=+11.563271895" observedRunningTime="2025-11-24 09:31:46.292085779 +0000 UTC m=+12.155240294" watchObservedRunningTime="2025-11-24 09:31:46.292162651 +0000 UTC m=+12.155317163"
	Nov 24 09:31:47 embed-certs-673346 kubelet[733]: I1124 09:31:47.748888     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:47 embed-certs-673346 kubelet[733]: E1124 09:31:47.749063     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.223464     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.320092     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.320318     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: E1124 09:32:01.320597     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:07 embed-certs-673346 kubelet[733]: I1124 09:32:07.749497     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:07 embed-certs-673346 kubelet[733]: E1124 09:32:07.749693     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:08 embed-certs-673346 kubelet[733]: I1124 09:32:08.340914     733 scope.go:117] "RemoveContainer" containerID="bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	Nov 24 09:32:21 embed-certs-673346 kubelet[733]: I1124 09:32:21.224154     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:21 embed-certs-673346 kubelet[733]: E1124 09:32:21.224415     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: kubelet.service: Consumed 1.781s CPU time.
	
	
	==> kubernetes-dashboard [c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167] <==
	2025/11/24 09:31:45 Using namespace: kubernetes-dashboard
	2025/11/24 09:31:45 Using in-cluster config to connect to apiserver
	2025/11/24 09:31:45 Using secret token for csrf signing
	2025/11/24 09:31:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:31:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:31:45 Successful initial request to the apiserver, version: v1.34.2
	2025/11/24 09:31:45 Generating JWE encryption key
	2025/11/24 09:31:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:31:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:31:45 Initializing JWE encryption key from synchronized object
	2025/11/24 09:31:45 Creating in-cluster Sidecar client
	2025/11/24 09:31:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:31:45 Serving insecurely on HTTP port: 9090
	2025/11/24 09:32:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:31:45 Starting overwatch
	
	
	==> storage-provisioner [bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec] <==
	I1124 09:31:37.590007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:32:07.594507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021] <==
	I1124 09:32:08.392772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:32:08.399396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:32:08.399435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:32:08.401442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:11.856906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:16.117888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:19.716826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:22.770792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.792820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.798643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:32:25.798797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:32:25.798953       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1!
	I1124 09:32:25.798941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24e163fb-f470-4eb3-b56c-97d0ebe5b8c9", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1 became leader
	W1124 09:32:25.800888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.804133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:32:25.899243       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1!
	W1124 09:32:27.807116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:27.812137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:29.815735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:29.819680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:31.822474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:31.827113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:33.832600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:33.837502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-673346 -n embed-certs-673346
E1124 09:32:35.645197    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-673346 -n embed-certs-673346: exit status 2 (324.873521ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-673346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-673346
helpers_test.go:243: (dbg) docker inspect embed-certs-673346:

-- stdout --
	[
	    {
	        "Id": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	        "Created": "2025-11-24T09:30:19.733597004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 362246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:31:27.077416072Z",
	            "FinishedAt": "2025-11-24T09:31:26.210769426Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/hosts",
	        "LogPath": "/var/lib/docker/containers/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794/1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794-json.log",
	        "Name": "/embed-certs-673346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-673346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-673346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bda3483b0ffa7cc3666d10672ccf894ba4d0e190d4dc8846dffa13d7c075794",
	                "LowerDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9-init/diff:/var/lib/docker/overlay2/72f9c2462903d95c14e60bec0d0632c957bd2e5389fe4dc099fa9cd03c134379/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86c1a4d36defa189b20913c7ba1ef1d25e3ed9bb5e9967486b780845d18a09f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-673346",
	                "Source": "/var/lib/docker/volumes/embed-certs-673346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-673346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-673346",
	                "name.minikube.sigs.k8s.io": "embed-certs-673346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0ff4631d21e5350bee232253355a5aeb4019306e481a7219f940c057d2d34711",
	            "SandboxKey": "/var/run/docker/netns/0ff4631d21e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-673346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97d1cb035a36ae5fa5c959009087829b88d672ef46bb4e02a32ec47d72e472d5",
	                    "EndpointID": "ad144b8152ab3f306ad9fdf38dfa825fc793cdd959453c9413862f8a08cd8a56",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:fa:49:8f:59:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-673346",
	                        "1bda3483b0ff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
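For reference, the SSH host port that later steps need from a dump like this can be read back with the same Go template minikube itself runs further down in these logs (a minimal sketch; embed-certs-673346 is the profile under test):

	docker container inspect embed-certs-673346 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33138 for the container state captured above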
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346: exit status 2 (320.762545ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-673346 logs -n 25: (1.07993879s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-767267 image list --format=json                                                                                                                          │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-767267 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ newest-cni-639420 image list --format=json                                                                                                                               │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p newest-cni-639420 --alsologtostderr -v=1                                                                                                                              │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p old-k8s-version-767267                                                                                                                                                │ old-k8s-version-767267       │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ no-preload-938348 image list --format=json                                                                                                                               │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ pause   │ -p no-preload-938348 --alsologtostderr -v=1                                                                                                                              │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │                     │
	│ delete  │ -p newest-cni-639420                                                                                                                                                     │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                     │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p newest-cni-639420                                                                                                                                                     │ newest-cni-639420            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ delete  │ -p no-preload-938348                                                                                                                                                     │ no-preload-938348            │ jenkins │ v1.37.0 │ 24 Nov 25 09:30 UTC │ 24 Nov 25 09:30 UTC │
	│ image   │ default-k8s-diff-port-164377 image list --format=json                                                                                                                    │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-673346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-164377 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-673346 --alsologtostderr -v=3                                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ delete  │ -p default-k8s-diff-port-164377                                                                                                                                          │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ delete  │ -p default-k8s-diff-port-164377                                                                                                                                          │ default-k8s-diff-port-164377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-673346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:32 UTC │
	│ image   │ embed-certs-673346 image list --format=json                                                                                                                              │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:32 UTC │ 24 Nov 25 09:32 UTC │
	│ pause   │ -p embed-certs-673346 --alsologtostderr -v=1                                                                                                                             │ embed-certs-673346           │ jenkins │ v1.37.0 │ 24 Nov 25 09:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
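	The last audit row, the pause invocation with an empty END TIME, is the command whose failure this post-mortem documents. Replaying it against the same profile is a single command (a sketch, assuming the tree's freshly built binary in out/ as in the rows above):

	out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1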
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:31:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
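	Reading the first entry below against that format: in "I1124 09:31:26.848184  362032 out.go:360]", I is the severity (Info; W/E/F would be Warning/Error/Fatal), 1124 is mmdd (Nov 24), 09:31:26.848184 is the timestamp, 362032 is the threadid (here the minikube process), and out.go:360 is the emitting file and line.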
	I1124 09:31:26.848184  362032 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:31:26.848478  362032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:26.848488  362032 out.go:374] Setting ErrFile to fd 2...
	I1124 09:31:26.848495  362032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:26.848717  362032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:31:26.849183  362032 out.go:368] Setting JSON to false
	I1124 09:31:26.850169  362032 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4433,"bootTime":1763972254,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:31:26.850231  362032 start.go:143] virtualization: kvm guest
	I1124 09:31:26.852143  362032 out.go:179] * [embed-certs-673346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:31:26.853632  362032 notify.go:221] Checking for updates...
	I1124 09:31:26.853646  362032 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:31:26.855028  362032 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:31:26.856327  362032 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:26.857613  362032 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:31:26.858761  362032 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:31:26.859973  362032 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:31:26.861514  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:26.862135  362032 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:31:26.885601  362032 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:31:26.885717  362032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:31:26.943191  362032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 09:31:26.933368732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:31:26.943396  362032 docker.go:319] overlay module found
	I1124 09:31:26.945145  362032 out.go:179] * Using the docker driver based on existing profile
	I1124 09:31:26.946390  362032 start.go:309] selected driver: docker
	I1124 09:31:26.946408  362032 start.go:927] validating driver "docker" against &{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:26.946540  362032 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:31:26.947165  362032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:31:27.006065  362032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 09:31:26.996811975 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:31:27.006352  362032 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:31:27.006399  362032 cni.go:84] Creating CNI manager for ""
	I1124 09:31:27.006457  362032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:31:27.006504  362032 start.go:353] cluster config:
	{Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:27.008380  362032 out.go:179] * Starting "embed-certs-673346" primary control-plane node in "embed-certs-673346" cluster
	I1124 09:31:27.009440  362032 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:31:27.010536  362032 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:31:27.011560  362032 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:31:27.011604  362032 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:31:27.011619  362032 cache.go:65] Caching tarball of preloaded images
	I1124 09:31:27.011641  362032 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:31:27.011716  362032 preload.go:238] Found /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:31:27.011734  362032 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:31:27.011832  362032 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:31:27.032890  362032 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:31:27.032907  362032 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:31:27.032925  362032 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:31:27.032962  362032 start.go:360] acquireMachinesLock for embed-certs-673346: {Name:mke42f7eda6495a6293833a93353c50b3546b267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:31:27.033013  362032 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "embed-certs-673346"
	I1124 09:31:27.033029  362032 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:31:27.033040  362032 fix.go:54] fixHost starting: 
	I1124 09:31:27.033289  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:27.050584  362032 fix.go:112] recreateIfNeeded on embed-certs-673346: state=Stopped err=<nil>
	W1124 09:31:27.050631  362032 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:31:27.052314  362032 out.go:252] * Restarting existing docker container for "embed-certs-673346" ...
	I1124 09:31:27.052403  362032 cli_runner.go:164] Run: docker start embed-certs-673346
	I1124 09:31:27.320493  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:27.340082  362032 kic.go:430] container "embed-certs-673346" state is running.
	I1124 09:31:27.340507  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:27.358821  362032 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/config.json ...
	I1124 09:31:27.359083  362032 machine.go:94] provisionDockerMachine start ...
	I1124 09:31:27.359160  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:27.377380  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:27.377636  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:27.377647  362032 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:31:27.378287  362032 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47210->127.0.0.1:33138: read: connection reset by peer
	I1124 09:31:30.522612  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:31:30.522650  362032 ubuntu.go:182] provisioning hostname "embed-certs-673346"
	I1124 09:31:30.522713  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.540979  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:30.541207  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:30.541221  362032 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-673346 && echo "embed-certs-673346" | sudo tee /etc/hostname
	I1124 09:31:30.693556  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-673346
	
	I1124 09:31:30.693638  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.711160  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:30.711450  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:30.711475  362032 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-673346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-673346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-673346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:31:30.853911  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:31:30.853949  362032 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5690/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5690/.minikube}
	I1124 09:31:30.853990  362032 ubuntu.go:190] setting up certificates
	I1124 09:31:30.854011  362032 provision.go:84] configureAuth start
	I1124 09:31:30.854093  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:30.871999  362032 provision.go:143] copyHostCerts
	I1124 09:31:30.872069  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem, removing ...
	I1124 09:31:30.872104  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem
	I1124 09:31:30.872190  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/ca.pem (1082 bytes)
	I1124 09:31:30.872320  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem, removing ...
	I1124 09:31:30.872345  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem
	I1124 09:31:30.872392  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/cert.pem (1123 bytes)
	I1124 09:31:30.872497  362032 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem, removing ...
	I1124 09:31:30.872507  362032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem
	I1124 09:31:30.872547  362032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5690/.minikube/key.pem (1679 bytes)
	I1124 09:31:30.872636  362032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-673346 san=[127.0.0.1 192.168.76.2 embed-certs-673346 localhost minikube]
	I1124 09:31:30.970588  362032 provision.go:177] copyRemoteCerts
	I1124 09:31:30.970662  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:31:30.970712  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:30.988687  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.089551  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 09:31:31.106859  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:31:31.124101  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:31:31.141157  362032 provision.go:87] duration metric: took 287.129503ms to configureAuth
	I1124 09:31:31.141188  362032 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:31:31.141376  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:31.141480  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.159126  362032 main.go:143] libmachine: Using SSH client type: native
	I1124 09:31:31.159363  362032 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 09:31:31.159381  362032 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:31:31.481439  362032 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
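	The drop-in written above can be checked on the node afterwards (a sketch; same path and content as in the command output):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '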
	I1124 09:31:31.481464  362032 machine.go:97] duration metric: took 4.122363333s to provisionDockerMachine
	I1124 09:31:31.481478  362032 start.go:293] postStartSetup for "embed-certs-673346" (driver="docker")
	I1124 09:31:31.481488  362032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:31:31.481538  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:31:31.481575  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.500967  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.603046  362032 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:31:31.606547  362032 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:31:31.606570  362032 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:31:31.606583  362032 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/addons for local assets ...
	I1124 09:31:31.606635  362032 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5690/.minikube/files for local assets ...
	I1124 09:31:31.606723  362032 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem -> 92432.pem in /etc/ssl/certs
	I1124 09:31:31.606847  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:31:31.614217  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:31:31.631463  362032 start.go:296] duration metric: took 149.973912ms for postStartSetup
	I1124 09:31:31.631552  362032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:31:31.631597  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.649744  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.748550  362032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:31:31.753099  362032 fix.go:56] duration metric: took 4.720052564s for fixHost
	I1124 09:31:31.753128  362032 start.go:83] releasing machines lock for "embed-certs-673346", held for 4.720104047s
	I1124 09:31:31.753195  362032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-673346
	I1124 09:31:31.770785  362032 ssh_runner.go:195] Run: cat /version.json
	I1124 09:31:31.770829  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.770867  362032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:31:31.770938  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:31.791759  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.792013  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:31.888429  362032 ssh_runner.go:195] Run: systemctl --version
	I1124 09:31:31.939417  362032 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:31:31.973129  362032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:31:31.977773  362032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:31:31.977832  362032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:31:31.986229  362032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:31:31.986256  362032 start.go:496] detecting cgroup driver to use...
	I1124 09:31:31.986298  362032 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:31:31.986357  362032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:31:31.999703  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:31:32.011653  362032 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:31:32.011697  362032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:31:32.025378  362032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:31:32.037823  362032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:31:32.113069  362032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:31:32.187822  362032 docker.go:234] disabling docker service ...
	I1124 09:31:32.187879  362032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:31:32.201614  362032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:31:32.213571  362032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:31:32.290005  362032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:31:32.366752  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:31:32.378941  362032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:31:32.392744  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:32.537638  362032 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:31:32.537697  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.547505  362032 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 09:31:32.547566  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.556327  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.564905  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.573349  362032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:31:32.580970  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.589411  362032 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.597243  362032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:31:32.605451  362032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:31:32.612572  362032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:31:32.620108  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:32.696934  362032 ssh_runner.go:195] Run: sudo systemctl restart crio
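	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape before the restart (a sketch of the end state implied by the commands, not a captured file; the section headers are assumed from a stock cri-o config):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]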
	I1124 09:31:32.830887  362032 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:31:32.830950  362032 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:31:32.834872  362032 start.go:564] Will wait 60s for crictl version
	I1124 09:31:32.834928  362032 ssh_runner.go:195] Run: which crictl
	I1124 09:31:32.838317  362032 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:31:32.862746  362032 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
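	With /etc/crictl.yaml pointing crictl at the cri-o socket (written a few steps up), the same version check works from any shell on the node even without that config file (a sketch):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version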
	I1124 09:31:32.862814  362032 ssh_runner.go:195] Run: crio --version
	I1124 09:31:32.889304  362032 ssh_runner.go:195] Run: crio --version
	I1124 09:31:32.917997  362032 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:31:32.919175  362032 cli_runner.go:164] Run: docker network inspect embed-certs-673346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:31:32.936616  362032 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 09:31:32.940665  362032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:31:32.950588  362032 kubeadm.go:884] updating cluster {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:31:32.950869  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.105808  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.258045  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.405656  362032 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:31:33.405831  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.560389  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.711549  362032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:31:33.867141  362032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:31:33.899459  362032 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:31:33.899486  362032 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:31:33.899537  362032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:31:33.923591  362032 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:31:33.923620  362032 cache_images.go:86] Images are preloaded, skipping loading
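
crio.go:514 concludes "all images are preloaded" by listing images over the CRI and checking that the expected tags are present. A sketch of that check against `crictl images --output json`; the JSON field names here assume the CRI ListImages response shape that crictl emits, and the required-image list is an illustrative subset, not minikube's actual list:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Assumed shape of `crictl images --output json` (mirrors the CRI response).
    type criImageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list criImageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Illustrative subset of what a v1.34.2 preload must contain.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.34.2",
    		"registry.k8s.io/kube-proxy:v1.34.2",
    	} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }
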
	I1124 09:31:33.923629  362032 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 09:31:33.923756  362032 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-673346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:31:33.923842  362032 ssh_runner.go:195] Run: crio config
	I1124 09:31:33.968406  362032 cni.go:84] Creating CNI manager for ""
	I1124 09:31:33.968432  362032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:31:33.968446  362032 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:31:33.968469  362032 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-673346 NodeName:embed-certs-673346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:31:33.968588  362032 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-673346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
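
The kubeadm config rendered above (scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below) is four API documents in one file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that enumerates the document kinds, using gopkg.in/yaml.v3 as an assumed dependency:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type typeMeta struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var tm typeMeta
    		if err := dec.Decode(&tm); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
    	}
    }
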
	
	I1124 09:31:33.968649  362032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:31:33.977100  362032 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:31:33.977170  362032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:31:33.984786  362032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 09:31:33.997457  362032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:31:34.009658  362032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 09:31:34.021901  362032 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:31:34.025577  362032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
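
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: strip any stale mapping for the name, then append the current IP. The same rewrite as a Go sketch (name and IP taken from the log; like the `sudo cp` it mirrors, it needs root to write /etc/hosts):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const name = "control-plane.minikube.internal"
    	const entry = "192.168.76.2\t" + name

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		// Drop any previous mapping for the control-plane name.
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("pinned", entry)
    }
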
	I1124 09:31:34.035327  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:34.113407  362032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:31:34.134774  362032 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346 for IP: 192.168.76.2
	I1124 09:31:34.134807  362032 certs.go:195] generating shared ca certs ...
	I1124 09:31:34.134828  362032 certs.go:227] acquiring lock for ca certs: {Name:mk6f5b28854247012e3e4ea1e8d89c404d793ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.135018  362032 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key
	I1124 09:31:34.135098  362032 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key
	I1124 09:31:34.135113  362032 certs.go:257] generating profile certs ...
	I1124 09:31:34.135242  362032 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/client.key
	I1124 09:31:34.135324  362032 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key.f0325844
	I1124 09:31:34.135395  362032 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key
	I1124 09:31:34.135552  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem (1338 bytes)
	W1124 09:31:34.135596  362032 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243_empty.pem, impossibly tiny 0 bytes
	I1124 09:31:34.135607  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:31:34.135649  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:31:34.135683  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:31:34.135714  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/certs/key.pem (1679 bytes)
	I1124 09:31:34.135774  362032 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem (1708 bytes)
	I1124 09:31:34.136540  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:31:34.154721  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:31:34.173065  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:31:34.192259  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 09:31:34.216905  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:31:34.235326  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:31:34.252866  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:31:34.269818  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/embed-certs-673346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:31:34.286771  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/certs/9243.pem --> /usr/share/ca-certificates/9243.pem (1338 bytes)
	I1124 09:31:34.303819  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/ssl/certs/92432.pem --> /usr/share/ca-certificates/92432.pem (1708 bytes)
	I1124 09:31:34.321226  362032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:31:34.339013  362032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:31:34.350853  362032 ssh_runner.go:195] Run: openssl version
	I1124 09:31:34.356609  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9243.pem && ln -fs /usr/share/ca-certificates/9243.pem /etc/ssl/certs/9243.pem"
	I1124 09:31:34.364761  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.368658  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:47 /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.368700  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9243.pem
	I1124 09:31:34.402991  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9243.pem /etc/ssl/certs/51391683.0"
	I1124 09:31:34.411570  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92432.pem && ln -fs /usr/share/ca-certificates/92432.pem /etc/ssl/certs/92432.pem"
	I1124 09:31:34.420148  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.423823  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:47 /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.423870  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92432.pem
	I1124 09:31:34.457560  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92432.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:31:34.465473  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:31:34.473738  362032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.477395  362032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.477450  362032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:31:34.511497  362032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
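
The three `ln -fs` runs above build OpenSSL-style lookup links: each CA is symlinked as `<subject-hash>.0` under /etc/ssl/certs, where the hash is exactly what the preceding `openssl x509 -hash -noout` call prints (51391683, 3ec20f2e, b5213941 here). A sketch reproducing one link by shelling out to openssl, mirroring the logged commands:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same command the log runs: print the cert's subject hash.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", cert)
    }
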
	I1124 09:31:34.520078  362032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:31:34.523879  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:31:34.558318  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:31:34.592219  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:31:34.626535  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:31:34.673149  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:31:34.718656  362032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
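
Each `openssl x509 ... -checkend 86400` above exits non-zero if the certificate will have expired 24 hours from now, which is how minikube decides whether the control-plane certs need regenerating. The equivalent test with crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in certificate file")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: fail if the cert will be expired 24h from now.
    	deadline := time.Now().Add(24 * time.Hour)
    	if deadline.After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid beyond", deadline.Format(time.RFC3339))
    }
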
	I1124 09:31:34.773194  362032 kubeadm.go:401] StartCluster: {Name:embed-certs-673346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-673346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:31:34.773306  362032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:31:34.773396  362032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:31:34.809346  362032 cri.go:89] found id: "67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937"
	I1124 09:31:34.809372  362032 cri.go:89] found id: "fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee"
	I1124 09:31:34.809379  362032 cri.go:89] found id: "e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10"
	I1124 09:31:34.809384  362032 cri.go:89] found id: "242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9"
	I1124 09:31:34.809388  362032 cri.go:89] found id: ""
	I1124 09:31:34.809436  362032 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:31:34.822307  362032 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:31:34Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:31:34.822444  362032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:31:34.830643  362032 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:31:34.830667  362032 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:31:34.830713  362032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:31:34.838128  362032 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:31:34.838571  362032 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-673346" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:34.838682  362032 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5690/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-673346" cluster setting kubeconfig missing "embed-certs-673346" context setting]
	I1124 09:31:34.838938  362032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.840191  362032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:31:34.847810  362032 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 09:31:34.847850  362032 kubeadm.go:602] duration metric: took 17.177062ms to restartPrimaryControlPlane
	I1124 09:31:34.847860  362032 kubeadm.go:403] duration metric: took 74.67789ms to StartCluster
	I1124 09:31:34.847881  362032 settings.go:142] acquiring lock: {Name:mkcd12f129bdb339015980f45cdbea1785158e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.847948  362032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:31:34.848787  362032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5690/kubeconfig: {Name:mk33e51c69f97789d890bd1e5d2e8e11ce5b6c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:31:34.849011  362032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:31:34.849086  362032 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:31:34.849180  362032 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-673346"
	I1124 09:31:34.849205  362032 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-673346"
	I1124 09:31:34.849205  362032 addons.go:70] Setting dashboard=true in profile "embed-certs-673346"
	I1124 09:31:34.849221  362032 addons.go:70] Setting default-storageclass=true in profile "embed-certs-673346"
	I1124 09:31:34.849229  362032 addons.go:239] Setting addon dashboard=true in "embed-certs-673346"
	W1124 09:31:34.849238  362032 addons.go:248] addon dashboard should already be in state true
	I1124 09:31:34.849241  362032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-673346"
	W1124 09:31:34.849214  362032 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:31:34.849256  362032 config.go:182] Loaded profile config "embed-certs-673346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:34.849267  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.849299  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.849584  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.849742  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.849788  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.852027  362032 out.go:179] * Verifying Kubernetes components...
	I1124 09:31:34.853083  362032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:31:34.874425  362032 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:31:34.875566  362032 addons.go:239] Setting addon default-storageclass=true in "embed-certs-673346"
	W1124 09:31:34.875585  362032 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:31:34.875610  362032 host.go:66] Checking if "embed-certs-673346" exists ...
	I1124 09:31:34.875671  362032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:31:34.876065  362032 cli_runner.go:164] Run: docker container inspect embed-certs-673346 --format={{.State.Status}}
	I1124 09:31:34.876750  362032 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:31:34.876859  362032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:31:34.876873  362032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:31:34.876918  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.877848  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:31:34.877870  362032 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:31:34.877917  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.909023  362032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:31:34.909045  362032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:31:34.909190  362032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-673346
	I1124 09:31:34.911241  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.915016  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.935783  362032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/embed-certs-673346/id_rsa Username:docker}
	I1124 09:31:34.995449  362032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:31:35.008305  362032 node_ready.go:35] waiting up to 6m0s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:31:35.028770  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:31:35.029702  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:31:35.029721  362032 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:31:35.043437  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:31:35.043456  362032 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:31:35.054923  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:31:35.058079  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:31:35.058102  362032 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:31:35.071956  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:31:35.071987  362032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:31:35.088622  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:31:35.088649  362032 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:31:35.102267  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:31:35.102322  362032 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:31:35.114797  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:31:35.114818  362032 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:31:35.128174  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:31:35.128201  362032 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:31:35.140805  362032 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:31:35.140827  362032 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:31:35.153364  362032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:31:36.694819  362032 node_ready.go:49] node "embed-certs-673346" is "Ready"
	I1124 09:31:36.694857  362032 node_ready.go:38] duration metric: took 1.686439921s for node "embed-certs-673346" to be "Ready" ...
	I1124 09:31:36.694875  362032 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:31:36.694933  362032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:31:37.224682  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.195875666s)
	I1124 09:31:37.224742  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.169789571s)
	I1124 09:31:37.224864  362032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.071453546s)
	I1124 09:31:37.224887  362032 api_server.go:72] duration metric: took 2.375849138s to wait for apiserver process to appear ...
	I1124 09:31:37.224899  362032 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:31:37.224928  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:37.232429  362032 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-673346 addons enable metrics-server
	
	I1124 09:31:37.233488  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:31:37.233512  362032 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:31:37.255325  362032 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 09:31:37.259784  362032 addons.go:530] duration metric: took 2.410697418s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 09:31:37.725175  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:37.729559  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:31:37.729588  362032 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:31:38.225099  362032 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:31:38.229271  362032 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 09:31:38.230234  362032 api_server.go:141] control plane version: v1.34.2
	I1124 09:31:38.230254  362032 api_server.go:131] duration metric: took 1.005349195s to wait for apiserver health ...
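
The 500s above are the expected transient state: the two failing post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) flip to ok within about a second, so api_server.go simply re-polls /healthz until it gets a 200. A sketch of that loop; certificate verification is skipped because the probe hits the node IP directly with the cluster's self-signed serving cert:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    }
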
	I1124 09:31:38.230271  362032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:31:38.233863  362032 system_pods.go:59] 8 kube-system pods found
	I1124 09:31:38.233901  362032 system_pods.go:61] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:31:38.233910  362032 system_pods.go:61] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:31:38.233923  362032 system_pods.go:61] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:31:38.233933  362032 system_pods.go:61] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:31:38.233938  362032 system_pods.go:61] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:31:38.233946  362032 system_pods.go:61] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:31:38.233954  362032 system_pods.go:61] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:31:38.233961  362032 system_pods.go:61] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:31:38.233971  362032 system_pods.go:74] duration metric: took 3.692305ms to wait for pod list to return data ...
	I1124 09:31:38.233981  362032 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:31:38.236421  362032 default_sa.go:45] found service account: "default"
	I1124 09:31:38.236438  362032 default_sa.go:55] duration metric: took 2.450768ms for default service account to be created ...
	I1124 09:31:38.236446  362032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:31:38.238821  362032 system_pods.go:86] 8 kube-system pods found
	I1124 09:31:38.238845  362032 system_pods.go:89] "coredns-66bc5c9577-vgl62" [a2f79272-9bc2-421a-8b98-02af7ee3ad09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:31:38.238854  362032 system_pods.go:89] "etcd-embed-certs-673346" [a59270af-ec4e-4a69-a8fc-d30d2b9c35ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:31:38.238863  362032 system_pods.go:89] "kindnet-zm85n" [8ed3aa09-03b5-4898-bcf8-d4a66a58c8d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:31:38.238876  362032 system_pods.go:89] "kube-apiserver-embed-certs-673346" [b4f0ab84-785e-4f4e-952b-0257417567ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:31:38.238883  362032 system_pods.go:89] "kube-controller-manager-embed-certs-673346" [e63b5a98-3845-45a7-84c0-65a15f3744f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:31:38.238932  362032 system_pods.go:89] "kube-proxy-m54gs" [280a5343-2e8e-4bfa-8589-49693afaef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:31:38.238940  362032 system_pods.go:89] "kube-scheduler-embed-certs-673346" [cff572d9-f4ec-4933-92c3-14cf2af0310e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:31:38.238957  362032 system_pods.go:89] "storage-provisioner" [f54b959f-374a-4003-809e-9077f9384e37] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:31:38.238970  362032 system_pods.go:126] duration metric: took 2.518483ms to wait for k8s-apps to be running ...
	I1124 09:31:38.238983  362032 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:31:38.239038  362032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:31:38.264692  362032 system_svc.go:56] duration metric: took 25.701026ms WaitForService to wait for kubelet
	I1124 09:31:38.264717  362032 kubeadm.go:587] duration metric: took 3.41568124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:31:38.264734  362032 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:31:38.268707  362032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:31:38.268736  362032 node_conditions.go:123] node cpu capacity is 8
	I1124 09:31:38.268756  362032 node_conditions.go:105] duration metric: took 4.015342ms to run NodePressure ...
	I1124 09:31:38.268772  362032 start.go:242] waiting for startup goroutines ...
	I1124 09:31:38.268781  362032 start.go:247] waiting for cluster config update ...
	I1124 09:31:38.268795  362032 start.go:256] writing updated cluster config ...
	I1124 09:31:38.269094  362032 ssh_runner.go:195] Run: rm -f paused
	I1124 09:31:38.273880  362032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:31:38.277967  362032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:31:40.282794  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:42.284577  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:44.784144  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:47.283815  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:49.784049  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:52.282886  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:54.283843  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:56.784159  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:31:58.784300  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:01.284017  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:03.783463  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:06.283707  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:08.283939  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:10.785154  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:13.283485  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:15.284468  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	W1124 09:32:17.784236  362032 pod_ready.go:104] pod "coredns-66bc5c9577-vgl62" is not "Ready", error: <nil>
	I1124 09:32:18.282734  362032 pod_ready.go:94] pod "coredns-66bc5c9577-vgl62" is "Ready"
	I1124 09:32:18.282761  362032 pod_ready.go:86] duration metric: took 40.004767039s for pod "coredns-66bc5c9577-vgl62" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.285180  362032 pod_ready.go:83] waiting for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.288776  362032 pod_ready.go:94] pod "etcd-embed-certs-673346" is "Ready"
	I1124 09:32:18.288798  362032 pod_ready.go:86] duration metric: took 3.591809ms for pod "etcd-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.290657  362032 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.294594  362032 pod_ready.go:94] pod "kube-apiserver-embed-certs-673346" is "Ready"
	I1124 09:32:18.294619  362032 pod_ready.go:86] duration metric: took 3.941216ms for pod "kube-apiserver-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.296507  362032 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.481375  362032 pod_ready.go:94] pod "kube-controller-manager-embed-certs-673346" is "Ready"
	I1124 09:32:18.481399  362032 pod_ready.go:86] duration metric: took 184.872787ms for pod "kube-controller-manager-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:18.681068  362032 pod_ready.go:83] waiting for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.081542  362032 pod_ready.go:94] pod "kube-proxy-m54gs" is "Ready"
	I1124 09:32:19.081575  362032 pod_ready.go:86] duration metric: took 400.480986ms for pod "kube-proxy-m54gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.281722  362032 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.682036  362032 pod_ready.go:94] pod "kube-scheduler-embed-certs-673346" is "Ready"
	I1124 09:32:19.682062  362032 pod_ready.go:86] duration metric: took 400.314027ms for pod "kube-scheduler-embed-certs-673346" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:32:19.682075  362032 pod_ready.go:40] duration metric: took 41.408164915s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
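
The "Ready or be gone" wait in pod_ready.go amounts to checking the standard PodReady condition on every pod matching the component labels. A client-go sketch of the same check for the CoreDNS pods (kubeconfig path and label selector taken from the log; k8s.io/client-go is an assumed dependency):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21978-5690/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s ready=%v\n", p.Name, ready)
    	}
    }
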
	I1124 09:32:19.728447  362032 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:32:19.730232  362032 out.go:179] * Done! kubectl is now configured to use "embed-certs-673346" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.07159028Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.074967253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 09:31:48 embed-certs-673346 crio[571]: time="2025-11-24T09:31:48.07498804Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.224029666Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=74ae10fc-1be4-49a5-9b9d-992c6eb4774b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.226563078Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=646e9ba4-63a5-4110-a647-a76e805cfc77 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.229227438Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=0e544738-b013-4c41-a075-ad22737735d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.229381021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.235912498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.236424353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.267583473Z" level=info msg="Created container a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=0e544738-b013-4c41-a075-ad22737735d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.26823357Z" level=info msg="Starting container: a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707" id=f4ade467-b81b-46ef-a816-a3038503e64c name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.270401242Z" level=info msg="Started container" PID=1783 containerID=a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper id=f4ade467-b81b-46ef-a816-a3038503e64c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6c2a8ea045b8493ea799331a7bb1bba937b8fab2811108ea2863497c2f114be
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.321472142Z" level=info msg="Removing container: f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1" id=e54df80f-8660-43fb-99a8-bb710e7dae66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:32:01 embed-certs-673346 crio[571]: time="2025-11-24T09:32:01.331477734Z" level=info msg="Removed container f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw/dashboard-metrics-scraper" id=e54df80f-8660-43fb-99a8-bb710e7dae66 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.341366548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7c1dad13-aaa8-4fd3-adb0-a1a79351687b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.342375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a88fd756-33b6-45e3-bf9a-044f02421117 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.343509706Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=868e455b-bc5c-4c02-ac46-4e773429baeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.34364432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348136531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.34835627Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8bb15c74a471d34c8ffe0e0f39239c964174bfe5d8dcd0d615ccc24edcbccf7e/merged/etc/passwd: no such file or directory"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348397275Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8bb15c74a471d34c8ffe0e0f39239c964174bfe5d8dcd0d615ccc24edcbccf7e/merged/etc/group: no such file or directory"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.348756607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.378015121Z" level=info msg="Created container ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021: kube-system/storage-provisioner/storage-provisioner" id=868e455b-bc5c-4c02-ac46-4e773429baeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.378678149Z" level=info msg="Starting container: ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021" id=deca52cf-61da-4965-9dff-c32eefdc5f6a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:32:08 embed-certs-673346 crio[571]: time="2025-11-24T09:32:08.380403288Z" level=info msg="Started container" PID=1797 containerID=ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021 description=kube-system/storage-provisioner/storage-provisioner id=deca52cf-61da-4965-9dff-c32eefdc5f6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8916e67a1b84b24b26e9be8d8d5353bb92cad0e993a2338a55d342b2a56ad7b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ed427bb796914       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   8916e67a1b84b       storage-provisioner                          kube-system
	a6b9c373cf698       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           35 seconds ago       Exited              dashboard-metrics-scraper   2                   d6c2a8ea045b8       dashboard-metrics-scraper-6ffb444bf9-gwkxw   kubernetes-dashboard
	c4f9f490192e6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   65b245f5ad47f       kubernetes-dashboard-855c9754f9-sndp5        kubernetes-dashboard
	6083f7f446b89       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   cc3b9c097a9ea       coredns-66bc5c9577-vgl62                     kube-system
	a46f7a30227ec       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   6bbdfbee702f7       busybox                                      default
	78d705be98a49       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   d536d787e1c1a       kindnet-zm85n                                kube-system
	88ef490a4f364       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           59 seconds ago       Running             kube-proxy                  0                   52f90f4b66945       kube-proxy-m54gs                             kube-system
	bb23121fe4c9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   8916e67a1b84b       storage-provisioner                          kube-system
	67fcf06dcde21       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   ec02092c8b542       kube-scheduler-embed-certs-673346            kube-system
	fb5135f68391c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   699d288c8262c       kube-apiserver-embed-certs-673346            kube-system
	e1b77741f09d0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   385b6156a1cb2       etcd-embed-certs-673346                      kube-system
	242bac3e334fe       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   bc8205ebdeb89       kube-controller-manager-embed-certs-673346   kube-system
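Reading the table above: dashboard-metrics-scraper sits in Exited state on ATTEMPT 2 (it is crash-looping; see the kubelet back-off messages below), while storage-provisioner is on ATTEMPT 1 after its first instance (bb23121fe4c9a) exited. A hedged way to reproduce this view and inspect the crashed container from the host — profile name taken from this run, and assuming crictl is present in the node image as usual for minikube:

	# list all containers inside the node, including exited ones
	minikube ssh -p embed-certs-673346 -- sudo crictl ps -a
	# dump state, exit code, and finish reason of the crashed scraper
	minikube ssh -p embed-certs-673346 -- sudo crictl inspect a6b9c373cf698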
	
	
	==> coredns [6083f7f446b89e57a45f62eebdd583ec8a99f8d488f9cd0c1b548305759138c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43560 - 30523 "HINFO IN 6920110998200931111.2568942224873600382. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036801363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
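The three "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not reach the kube-apiserver through the default kubernetes Service VIP for roughly the first 30 seconds after the restart; the "Still waiting on" readiness lines stop once the list/watch calls succeed. A minimal sketch of re-checking that path, assuming curl is available in the node image — the VIP is only routable where kube-proxy has programmed its rules, so the check runs on the node itself:

	minikube ssh -p embed-certs-673346 -- curl -sk https://10.96.0.1:443/version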
	
	
	==> describe nodes <==
	Name:               embed-certs-673346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-673346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=embed-certs-673346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:30:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-673346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:32:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:32:07 +0000   Mon, 24 Nov 2025 09:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-673346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d639e906-b423-4ee2-aa7b-1de85e945d2c
	  Boot ID:                    96787f28-6250-4ff3-88ef-72259aa98461
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-vgl62                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-embed-certs-673346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-zm85n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-embed-certs-673346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-673346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-m54gs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-embed-certs-673346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gwkxw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sndp5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  Starting                 59s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           114s                 node-controller  Node embed-certs-673346 event: Registered Node embed-certs-673346 in Controller
	  Normal  NodeReady                102s                 kubelet          Node embed-certs-673346 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node embed-certs-673346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node embed-certs-673346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node embed-certs-673346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                  node-controller  Node embed-certs-673346 event: Registered Node embed-certs-673346 in Controller
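The events above record three separate "Starting kubelet." groups (2m3s, 119s, and 62s ago) and two RegisteredNode events, consistent with the kubelet being restarted during initial provisioning and again for this test's stop/start cycle. An illustrative way to pull the same view straight from the cluster:

	kubectl --context embed-certs-673346 get events -A --sort-by=.lastTimestamp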
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: de 9f e1 d5 a3 b5 d6 35 82 09 ec 90 08 00
	[Nov24 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 3f 8b de 9d e5 08 06
	[  +0.000895] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[ +12.285402] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 62 78 17 8f ef 08 06
	[  +0.000417] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 16 50 0f 2f e9 08 06
	[Nov24 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[ +13.006026] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 af d5 eb 8a c7 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 46 69 cd 6e f9 08 06
	[  +4.926385] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 ed d7 84 f1 47 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b6 87 66 28 b9 59 08 06
	[  +6.559857] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	[Nov24 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 96 28 6b eb 84 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 dd bb 2a 26 8f 08 06
	
	
	==> etcd [e1b77741f09d01f0ef1a9962f313baae8e3d4c96dca7d3562b4df468b38f9f10] <==
	{"level":"warn","ts":"2025-11-24T09:31:36.075538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.085363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.093004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.100328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.107848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.119505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.126919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.140854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.150436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.164177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.170475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.176753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.182794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.189564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.196796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.204158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.210433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.217970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.235772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.239168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.245651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.252060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:31:36.299987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46286","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:31:45.416917Z","caller":"traceutil/trace.go:172","msg":"trace[83210444] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"128.822645ms","start":"2025-11-24T09:31:45.288068Z","end":"2025-11-24T09:31:45.416891Z","steps":["trace[83210444] 'process raft request'  (duration: 109.747005ms)","trace[83210444] 'compare'  (duration: 18.952172ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:31:45.622182Z","caller":"traceutil/trace.go:172","msg":"trace[1037500426] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"108.141371ms","start":"2025-11-24T09:31:45.514017Z","end":"2025-11-24T09:31:45.622158Z","steps":["trace[1037500426] 'process raft request'  (duration: 87.931453ms)","trace[1037500426] 'compare'  (duration: 20.110816ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:32:36 up  1:15,  0 user,  load average: 1.19, 2.57, 2.11
	Linux embed-certs-673346 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78d705be98a49a5fe120b0a9c3141ff73fc6d07e2511cb87247a5748aca30a32] <==
	I1124 09:31:37.756773       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:31:37.757040       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 09:31:37.757234       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:31:37.757256       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:31:37.757284       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:31:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:31:37.983622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:31:37.983654       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:31:37.983680       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:31:37.983811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:31:38.483949       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:31:38.483988       1 metrics.go:72] Registering metrics
	I1124 09:31:38.484082       1 controller.go:711] "Syncing nftables rules"
	I1124 09:31:48.052127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:31:48.052225       1 main.go:301] handling current node
	I1124 09:31:58.051596       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:31:58.051638       1 main.go:301] handling current node
	I1124 09:32:08.052017       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:08.052050       1 main.go:301] handling current node
	I1124 09:32:18.051488       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:18.051526       1 main.go:301] handling current node
	I1124 09:32:28.051705       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 09:32:28.051742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fb5135f68391c3fbceb14011ee8bf52afa63ce6643003ae1dafca7af2f115dee] <==
	I1124 09:31:36.775544       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:31:36.775551       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:31:36.775685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:31:36.775940       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 09:31:36.776807       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 09:31:36.776928       1 policy_source.go:240] refreshing policies
	I1124 09:31:36.778208       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 09:31:36.778261       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:31:36.778382       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 09:31:36.778445       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 09:31:36.778604       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:31:36.778854       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:31:36.782707       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:31:36.813393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:31:37.047972       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:31:37.073663       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:31:37.091669       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:31:37.100623       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:31:37.106742       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:31:37.136730       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.247.146"}
	I1124 09:31:37.145679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.13.54"}
	I1124 09:31:37.679285       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:31:40.217532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:31:40.667281       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:31:40.816775       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [242bac3e334febc8fbbf763dd3a50726f7606373615446f99d7c1109238cebd9] <==
	I1124 09:31:40.181276       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:31:40.213765       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:31:40.213796       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:31:40.213825       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:31:40.213856       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 09:31:40.213941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:31:40.213959       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:31:40.213970       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:31:40.214207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:31:40.214311       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:31:40.214311       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:31:40.214987       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:31:40.219003       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:31:40.219057       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:31:40.220814       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:31:40.223004       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 09:31:40.223139       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:31:40.223231       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-673346"
	I1124 09:31:40.223278       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 09:31:40.224371       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:31:40.226591       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:31:40.228837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:31:40.230030       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:31:40.232375       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:31:40.236747       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [88ef490a4f3640428064e1abfbc1d73994e1ca39d582198f86ac9e96a86d0f27] <==
	I1124 09:31:37.616959       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:31:37.672015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:31:37.772604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:31:37.772635       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 09:31:37.772737       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:31:37.793586       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:31:37.793666       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:31:37.799489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:31:37.799841       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:31:37.799878       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:31:37.803616       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:31:37.803638       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:31:37.803663       1 config.go:309] "Starting node config controller"
	I1124 09:31:37.803672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:31:37.803685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:31:37.803707       1 config.go:200] "Starting service config controller"
	I1124 09:31:37.803715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:31:37.803770       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:31:37.803787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:31:37.903719       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:31:37.903782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:31:37.903889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [67fcf06dcde21ab07e7a837bc9a473ee62285d8a2d6635a1c8bc2e493d856937] <==
	I1124 09:31:35.285695       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:31:36.695726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:31:36.695763       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:31:36.695776       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:31:36.695786       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:31:36.733590       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:31:36.733624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:31:36.736511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:31:36.736579       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:31:36.736898       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:31:36.737017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:31:36.837189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849321     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvsz\" (UniqueName: \"kubernetes.io/projected/64591962-d4dc-4736-bf59-225893e09447-kube-api-access-7nvsz\") pod \"kubernetes-dashboard-855c9754f9-sndp5\" (UID: \"64591962-d4dc-4736-bf59-225893e09447\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sndp5"
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849365     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e96ecf35-4937-4756-b450-f6c47f80fea3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gwkxw\" (UID: \"e96ecf35-4937-4756-b450-f6c47f80fea3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw"
	Nov 24 09:31:40 embed-certs-673346 kubelet[733]: I1124 09:31:40.849447     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxd2w\" (UniqueName: \"kubernetes.io/projected/e96ecf35-4937-4756-b450-f6c47f80fea3-kube-api-access-rxd2w\") pod \"dashboard-metrics-scraper-6ffb444bf9-gwkxw\" (UID: \"e96ecf35-4937-4756-b450-f6c47f80fea3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw"
	Nov 24 09:31:43 embed-certs-673346 kubelet[733]: I1124 09:31:43.269879     733 scope.go:117] "RemoveContainer" containerID="108f881ac432719fbca8dee367316360d21e77d8aa710164c46717dc916ebbf1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: I1124 09:31:44.274677     733 scope.go:117] "RemoveContainer" containerID="108f881ac432719fbca8dee367316360d21e77d8aa710164c46717dc916ebbf1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: I1124 09:31:44.274852     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:44 embed-certs-673346 kubelet[733]: E1124 09:31:44.275084     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:31:45 embed-certs-673346 kubelet[733]: I1124 09:31:45.279390     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:45 embed-certs-673346 kubelet[733]: E1124 09:31:45.279641     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:31:46 embed-certs-673346 kubelet[733]: I1124 09:31:46.292185     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sndp5" podStartSLOduration=1.701885267 podStartE2EDuration="6.292162651s" podCreationTimestamp="2025-11-24 09:31:40 +0000 UTC" firstStartedPulling="2025-11-24 09:31:41.10984002 +0000 UTC m=+6.972994511" lastFinishedPulling="2025-11-24 09:31:45.700117401 +0000 UTC m=+11.563271895" observedRunningTime="2025-11-24 09:31:46.292085779 +0000 UTC m=+12.155240294" watchObservedRunningTime="2025-11-24 09:31:46.292162651 +0000 UTC m=+12.155317163"
	Nov 24 09:31:47 embed-certs-673346 kubelet[733]: I1124 09:31:47.748888     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:31:47 embed-certs-673346 kubelet[733]: E1124 09:31:47.749063     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.223464     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.320092     733 scope.go:117] "RemoveContainer" containerID="f38ce55ce1ef5aa5afcc95afbae68f5683561ec69b78922ca4e99b05ccc5c1a1"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: I1124 09:32:01.320318     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:01 embed-certs-673346 kubelet[733]: E1124 09:32:01.320597     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:07 embed-certs-673346 kubelet[733]: I1124 09:32:07.749497     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:07 embed-certs-673346 kubelet[733]: E1124 09:32:07.749693     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:08 embed-certs-673346 kubelet[733]: I1124 09:32:08.340914     733 scope.go:117] "RemoveContainer" containerID="bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec"
	Nov 24 09:32:21 embed-certs-673346 kubelet[733]: I1124 09:32:21.224154     733 scope.go:117] "RemoveContainer" containerID="a6b9c373cf698869153f52d1f2e08a22def38fb421ec070238c931696d234707"
	Nov 24 09:32:21 embed-certs-673346 kubelet[733]: E1124 09:32:21.224415     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gwkxw_kubernetes-dashboard(e96ecf35-4937-4756-b450-f6c47f80fea3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gwkxw" podUID="e96ecf35-4937-4756-b450-f6c47f80fea3"
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:32:32 embed-certs-673346 systemd[1]: kubelet.service: Consumed 1.781s CPU time.
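The systemd lines at the tail show the kubelet being stopped at 09:32:32, which lines up with the Pause step of this test: pausing a profile stops the kubelet service while the control-plane containers remain visible to crio. A hedged sketch of that step (flag spelling mirrors the disable-addon invocation quoted earlier in this report):

	out/minikube-linux-amd64 pause -p embed-certs-673346 --alsologtostderr -v=1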
	
	
	==> kubernetes-dashboard [c4f9f490192e6ccb95becea8d7ee298981dec29a5a19458ff560025392ebd167] <==
	2025/11/24 09:31:45 Using namespace: kubernetes-dashboard
	2025/11/24 09:31:45 Using in-cluster config to connect to apiserver
	2025/11/24 09:31:45 Using secret token for csrf signing
	2025/11/24 09:31:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 09:31:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 09:31:45 Successful initial request to the apiserver, version: v1.34.2
	2025/11/24 09:31:45 Generating JWE encryption key
	2025/11/24 09:31:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 09:31:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 09:31:45 Initializing JWE encryption key from synchronized object
	2025/11/24 09:31:45 Creating in-cluster Sidecar client
	2025/11/24 09:31:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:31:45 Serving insecurely on HTTP port: 9090
	2025/11/24 09:32:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 09:31:45 Starting overwatch
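Both "Metric client health check failed ... (get services dashboard-metrics-scraper)" retries are expected while the scraper pod is crash-looping (see the kubelet back-off above); the trailing 09:31:45 "Starting overwatch" line appears out of order in the capture only, not in execution. To see why the scraper keeps dying, one could pull its previous-instance logs (pod name taken from the container table above):

	kubectl --context embed-certs-673346 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-gwkxw --previous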
	
	
	==> storage-provisioner [bb23121fe4c9ac7e8ad0be18907e60b9c5b2eb812d63d624d25da5b7dfb249ec] <==
	I1124 09:31:37.590007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:32:07.594507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
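This first storage-provisioner instance died on the same 10.96.0.1:443 timeout seen in the CoreDNS log, 30 seconds after starting; the kubelet then restarted it (the "RemoveContainer ... bb23121fe4c9a" line in the kubelet section and ATTEMPT 1 in the container table), and the replacement below comes up and wins the leader lease. The restart count is visible with, e.g.:

	kubectl --context embed-certs-673346 -n kube-system get pod storage-provisioner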
	
	
	==> storage-provisioner [ed427bb796914fc5b6c3879df00655499f6664bfdbb0b69ccdeef53f1fe7b021] <==
	I1124 09:32:08.399396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:32:08.399435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:32:08.401442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:11.856906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:16.117888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:19.716826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:22.770792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.792820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.798643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:32:25.798797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:32:25.798953       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1!
	I1124 09:32:25.798941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24e163fb-f470-4eb3-b56c-97d0ebe5b8c9", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1 became leader
	W1124 09:32:25.800888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:25.804133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:32:25.899243       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-673346_ee548368-2cee-4e0f-8542-dd3cd4a958a1!
	W1124 09:32:27.807116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:27.812137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:29.815735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:29.819680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:31.822474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:31.827113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:33.832600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:33.837502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:35.840877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:35.846978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-673346 -n embed-certs-673346
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-673346 -n embed-certs-673346: exit status 2 (322.097941ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
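
The "(may be ok)" note reflects that minikube status encodes component state in its exit code, so a paused profile (kubelet stopped above, apiserver container still running) exits nonzero even while the formatted APIServer field prints "Running". A hedged way to make the mismatch self-explanatory is the unformatted status:

	out/minikube-linux-amd64 status -p embed-certs-673346
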
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-673346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.84s)


Test pass (334/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.81
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.5
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.42
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0.43
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.8
31 TestOffline 58.4
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 128.54
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 8.42
57 TestAddons/StoppedEnableDisable 16.78
58 TestCertOptions 29.2
59 TestCertExpiration 215.85
61 TestForceSystemdFlag 29.47
62 TestForceSystemdEnv 43.03
67 TestErrorSpam/setup 21.97
68 TestErrorSpam/start 0.65
69 TestErrorSpam/status 0.94
70 TestErrorSpam/pause 6.62
71 TestErrorSpam/unpause 6.11
72 TestErrorSpam/stop 12.51
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.45
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.16
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.61
84 TestFunctional/serial/CacheCmd/cache/add_local 0.79
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 66.57
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.18
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 3.88
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 4.74
100 TestFunctional/parallel/DryRun 0.36
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.02
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 24.47
110 TestFunctional/parallel/SSHCmd 0.66
111 TestFunctional/parallel/CpCmd 1.85
112 TestFunctional/parallel/MySQL 16.41
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.85
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
122 TestFunctional/parallel/License 0.26
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.76
131 TestFunctional/parallel/ImageCommands/Setup 0.39
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.26
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
154 TestFunctional/parallel/ProfileCmd/profile_list 0.43
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
156 TestFunctional/parallel/MountCmd/any-port 5.51
157 TestFunctional/parallel/MountCmd/specific-port 1.57
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
159 TestFunctional/parallel/ServiceCmd/List 1.7
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 44.03
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.81
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.62
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.73
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.58
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 40.03
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.19
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.22
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.5
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.43
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 4.7
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.42
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.18
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.01
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 23.3
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.54
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.63
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 14.52
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.9
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.26
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.42
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.71
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.71
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.87
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.19
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.66
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.33
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.53
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.72
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.7
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 157.72
266 TestMultiControlPlane/serial/DeployApp 4.41
267 TestMultiControlPlane/serial/PingHostFromPods 1.03
268 TestMultiControlPlane/serial/AddWorkerNode 23.85
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
271 TestMultiControlPlane/serial/CopyFile 17.47
272 TestMultiControlPlane/serial/StopSecondaryNode 19.78
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.78
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.35
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.65
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
279 TestMultiControlPlane/serial/StopCluster 47.72
280 TestMultiControlPlane/serial/RestartCluster 57.63
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
282 TestMultiControlPlane/serial/AddSecondaryNode 38.32
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
288 TestJSONOutput/start/Command 40.74
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.95
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 32.08
314 TestKicCustomNetwork/use_default_bridge_network 25.2
315 TestKicExistingNetwork 24.6
316 TestKicCustomSubnet 27.96
317 TestKicStaticIP 27.12
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 50.13
322 TestMountStart/serial/StartWithMountFirst 4.94
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 7.81
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.66
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.24
329 TestMountStart/serial/RestartStopped 7.25
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 95.17
334 TestMultiNode/serial/DeployApp2Nodes 3.96
335 TestMultiNode/serial/PingHostFrom2Pods 0.72
336 TestMultiNode/serial/AddNode 23.75
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.66
339 TestMultiNode/serial/CopyFile 9.97
340 TestMultiNode/serial/StopNode 2.26
341 TestMultiNode/serial/StartAfterStop 7.34
342 TestMultiNode/serial/RestartKeepsNodes 78.44
343 TestMultiNode/serial/DeleteNode 5.24
344 TestMultiNode/serial/StopMultiNode 30.3
345 TestMultiNode/serial/RestartMultiNode 52.67
346 TestMultiNode/serial/ValidateNameConflict 24.04
351 TestPreload 100.73
353 TestScheduledStopUnix 96.28
356 TestInsufficientStorage 11.7
357 TestRunningBinaryUpgrade 47.16
359 TestKubernetesUpgrade 320.78
360 TestMissingContainerUpgrade 76.58
362 TestStoppedBinaryUpgrade/Setup 0.44
363 TestPause/serial/Start 55.92
364 TestStoppedBinaryUpgrade/Upgrade 78.23
365 TestPause/serial/SecondStartNoReconfiguration 7.86
368 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
369 TestNoKubernetes/serial/StartWithK8s 29.43
370 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
378 TestNetworkPlugins/group/false 4.15
389 TestNoKubernetes/serial/StartWithStopK8s 18.89
390 TestNoKubernetes/serial/Start 6.88
391 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
392 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
393 TestNoKubernetes/serial/ProfileList 4.99
394 TestNoKubernetes/serial/Stop 1.27
395 TestNoKubernetes/serial/StartNoArgs 6.54
396 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
397 TestNetworkPlugins/group/auto/Start 43.28
398 TestNetworkPlugins/group/kindnet/Start 45.33
399 TestNetworkPlugins/group/auto/KubeletFlags 0.36
400 TestNetworkPlugins/group/auto/NetCatPod 10.22
401 TestNetworkPlugins/group/auto/DNS 0.11
402 TestNetworkPlugins/group/auto/Localhost 0.1
403 TestNetworkPlugins/group/auto/HairPin 0.09
404 TestNetworkPlugins/group/calico/Start 45.19
405 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
407 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
408 TestNetworkPlugins/group/kindnet/DNS 0.11
409 TestNetworkPlugins/group/kindnet/Localhost 0.09
410 TestNetworkPlugins/group/kindnet/HairPin 0.09
411 TestNetworkPlugins/group/custom-flannel/Start 48.95
412 TestNetworkPlugins/group/calico/ControllerPod 6.01
413 TestNetworkPlugins/group/calico/KubeletFlags 0.3
414 TestNetworkPlugins/group/calico/NetCatPod 8.18
415 TestNetworkPlugins/group/enable-default-cni/Start 71.21
416 TestNetworkPlugins/group/calico/DNS 0.13
417 TestNetworkPlugins/group/calico/Localhost 0.1
418 TestNetworkPlugins/group/calico/HairPin 0.1
419 TestNetworkPlugins/group/flannel/Start 47.29
420 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
421 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
422 TestNetworkPlugins/group/custom-flannel/DNS 0.13
423 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
424 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
425 TestNetworkPlugins/group/bridge/Start 68.27
426 TestNetworkPlugins/group/flannel/ControllerPod 6.01
427 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
428 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
429 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
430 TestNetworkPlugins/group/flannel/NetCatPod 8.18
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
434 TestNetworkPlugins/group/flannel/DNS 0.11
435 TestNetworkPlugins/group/flannel/Localhost 0.1
436 TestNetworkPlugins/group/flannel/HairPin 0.09
438 TestStartStop/group/old-k8s-version/serial/FirstStart 46.52
440 TestStartStop/group/no-preload/serial/FirstStart 47.81
441 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
442 TestNetworkPlugins/group/bridge/NetCatPod 8.22
443 TestNetworkPlugins/group/bridge/DNS 0.15
444 TestNetworkPlugins/group/bridge/Localhost 0.11
445 TestNetworkPlugins/group/bridge/HairPin 0.1
446 TestStartStop/group/old-k8s-version/serial/DeployApp 9.28
447 TestStartStop/group/no-preload/serial/DeployApp 8.23
449 TestStartStop/group/old-k8s-version/serial/Stop 17.72
451 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.97
453 TestStartStop/group/no-preload/serial/Stop 16.84
454 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
455 TestStartStop/group/old-k8s-version/serial/SecondStart 44.06
456 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
457 TestStartStop/group/no-preload/serial/SecondStart 46.67
459 TestStartStop/group/newest-cni/serial/FirstStart 33.26
460 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
462 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.25
463 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
464 TestStartStop/group/newest-cni/serial/DeployApp 0
466 TestStartStop/group/newest-cni/serial/Stop 2.67
467 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
468 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
469 TestStartStop/group/newest-cni/serial/SecondStart 11.25
470 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
473 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
474 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.77
475 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
476 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
477 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
478 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.82
481 TestStartStop/group/embed-certs/serial/FirstStart 43.84
482 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.86
484 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
485 TestStartStop/group/embed-certs/serial/DeployApp 8.22
486 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
487 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.73
490 TestStartStop/group/embed-certs/serial/Stop 18.16
491 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
492 TestStartStop/group/embed-certs/serial/SecondStart 53.28
493 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
494 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
495 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.68

TestDownloadOnly/v1.28.0/json-events (4.81s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.809400914s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.81s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 08:28:38.784963    9243 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 08:28:38.785044    9243 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
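
The preload-exists subtests above reduce to a file-existence check against the cache path printed in the log. Below is a minimal sketch of that check, assuming the path layout from the log line above; the "v18" preload schema tag is pinned from the logged file name, and the helper name is hypothetical, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the cached preload tarball for the given
// Kubernetes version and container runtime is already on disk. The file-name
// layout and the "v18" schema tag are copied from the log above; the function
// itself is a hypothetical stand-in for minikube's internal check.
func preloadExists(minikubeHome, k8sVersion, runtime string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return path, err == nil
}

func main() {
	path, ok := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "cri-o")
	if ok {
		fmt.Println("Found local preload:", path)
	} else {
		fmt.Println("No local preload at:", path)
	}
}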

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-092707
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-092707: exit status 85 (67.919742ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-092707 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:34.027925    9255 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:34.028174    9255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:34.028184    9255 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:34.028188    9255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:34.028401    9255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	W1124 08:28:34.028523    9255 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21978-5690/.minikube/config/config.json: open /home/jenkins/minikube-integration/21978-5690/.minikube/config/config.json: no such file or directory
	I1124 08:28:34.028967    9255 out.go:368] Setting JSON to true
	I1124 08:28:34.029912    9255 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":660,"bootTime":1763972254,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:34.029968    9255 start.go:143] virtualization: kvm guest
	I1124 08:28:34.034274    9255 out.go:99] [download-only-092707] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 08:28:34.034416    9255 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 08:28:34.034466    9255 notify.go:221] Checking for updates...
	I1124 08:28:34.035737    9255 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:28:34.037166    9255 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:34.038301    9255 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:28:34.039337    9255 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:28:34.040511    9255 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:28:34.042749    9255 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:28:34.042953    9255 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:34.067438    9255 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:28:34.067512    9255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:34.455828    9255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 08:28:34.4453631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:34.455934    9255 docker.go:319] overlay module found
	I1124 08:28:34.457513    9255 out.go:99] Using the docker driver based on user configuration
	I1124 08:28:34.457542    9255 start.go:309] selected driver: docker
	I1124 08:28:34.457551    9255 start.go:927] validating driver "docker" against <nil>
	I1124 08:28:34.457630    9255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:34.515228    9255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 08:28:34.506047018 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:34.515400    9255 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:34.515880    9255 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:28:34.516030    9255 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:28:34.517467    9255 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-092707 host does not exist
	  To start a cluster, run: "minikube start -p download-only-092707"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
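
The LogsDuration subtests deliberately run "minikube logs" against a download-only profile whose host was never created, and treat exit status 85 as the expected outcome. A minimal sketch of asserting a specific exit code from a subprocess in Go follows; the binary path and expected code mirror the log above, and the rest is illustrative, not the suite's actual test helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run "minikube logs" against a profile that was only ever used for
	// --download-only; no host exists, so a non-zero exit is expected.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-092707")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
		fmt.Printf("got expected exit status 85\n%s", out)
	case err == nil:
		fmt.Println("unexpected success: logs should fail without a host")
	default:
		fmt.Println("unexpected error:", err)
	}
}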

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-092707
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (3.5s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-290395 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-290395 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.499489479s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.50s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1124 08:28:42.707219    9243 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1124 08:28:42.707266    9243 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-290395
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-290395: exit status 85 (70.821391ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-092707 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-092707                                                                                                                                                   │ download-only-092707 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-290395 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-290395 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:39.256227    9611 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:39.256343    9611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:39.256350    9611 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:39.256356    9611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:39.256547    9611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:28:39.256999    9611 out.go:368] Setting JSON to true
	I1124 08:28:39.257778    9611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":665,"bootTime":1763972254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:39.257857    9611 start.go:143] virtualization: kvm guest
	I1124 08:28:39.259948    9611 out.go:99] [download-only-290395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:28:39.260121    9611 notify.go:221] Checking for updates...
	I1124 08:28:39.261387    9611 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:28:39.262773    9611 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:39.264033    9611 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:28:39.265310    9611 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:28:39.266552    9611 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:28:39.268749    9611 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:28:39.269014    9611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:39.292655    9611 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:28:39.292731    9611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:39.346325    9611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 08:28:39.337580738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:39.346453    9611 docker.go:319] overlay module found
	I1124 08:28:39.347867    9611 out.go:99] Using the docker driver based on user configuration
	I1124 08:28:39.347891    9611 start.go:309] selected driver: docker
	I1124 08:28:39.347899    9611 start.go:927] validating driver "docker" against <nil>
	I1124 08:28:39.347973    9611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:39.404579    9611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 08:28:39.393795748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:39.404718    9611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:39.405149    9611 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:28:39.405281    9611 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:28:39.406915    9611 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-290395 host does not exist
	  To start a cluster, run: "minikube start -p download-only-290395"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-290395
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.42s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.423921447s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.42s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0.43s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
I1124 08:28:46.622802    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:28:46.765419    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:28:46.911671    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.43s)
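
The "Not caching binary" lines above point at download URLs of the form <url>?checksum=file:<url>.sha256, which lets the fetcher verify each binary against its published SHA-256 file; this query-string form matches the hashicorp/go-getter convention, which minikube's downloader appears to use here. A minimal sketch of building such a URL (the helper below is illustrative, not minikube code):

package main

import "fmt"

// binaryURL builds a download URL whose query string tells the fetcher where
// to find the SHA-256 checksum file for verification. The URL shape is taken
// from the log above; this helper is a hypothetical illustration.
func binaryURL(version, osName, arch, binary string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s", version, osName, arch, binary)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	// Reproduces the kubeadm URL logged by the cached-images subtest.
	fmt.Println(binaryURL("v1.35.0-beta.0", "linux", "amd64", "kubeadm"))
}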

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029472
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029472: exit status 85 (71.876511ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-092707 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-092707 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-092707                                                                                                                                                          │ download-only-092707 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-290395 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-290395 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-290395                                                                                                                                                          │ download-only-290395 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │ 24 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-029472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-029472 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:43.185393    9971 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:43.185492    9971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:43.185503    9971 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:43.185510    9971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:43.185717    9971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:28:43.186145    9971 out.go:368] Setting JSON to true
	I1124 08:28:43.186987    9971 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":669,"bootTime":1763972254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:43.187040    9971 start.go:143] virtualization: kvm guest
	I1124 08:28:43.188764    9971 out.go:99] [download-only-029472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:28:43.188884    9971 notify.go:221] Checking for updates...
	I1124 08:28:43.190138    9971 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:28:43.191355    9971 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:43.192601    9971 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:28:43.196828    9971 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:28:43.198148    9971 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:28:43.200388    9971 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:28:43.200623    9971 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:43.223157    9971 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:28:43.223222    9971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:43.282265    9971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-24 08:28:43.273327665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:43.282400    9971 docker.go:319] overlay module found
	I1124 08:28:43.283969    9971 out.go:99] Using the docker driver based on user configuration
	I1124 08:28:43.283994    9971 start.go:309] selected driver: docker
	I1124 08:28:43.283999    9971 start.go:927] validating driver "docker" against <nil>
	I1124 08:28:43.284063    9971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:28:43.339562    9971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-24 08:28:43.329794951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:28:43.339694    9971 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:43.340155    9971 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:28:43.340286    9971 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:28:43.341945    9971 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-029472 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029472"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029472
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-572456 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-572456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-572456
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1124 08:28:48.287902    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-438068 --alsologtostderr --binary-mirror http://127.0.0.1:38621 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-438068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-438068
--- PASS: TestBinaryMirror (0.80s)

TestOffline (58.4s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-330284 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-330284 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (55.885256492s)
helpers_test.go:175: Cleaning up "offline-crio-330284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-330284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-330284: (2.511309134s)
--- PASS: TestOffline (58.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-962100
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-962100: exit status 85 (62.180072ms)

-- stdout --
	* Profile "addons-962100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962100"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-962100
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-962100: exit status 85 (61.329835ms)

-- stdout --
	* Profile "addons-962100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962100"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (128.54s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-962100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-962100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.539433835s)
--- PASS: TestAddons/Setup (128.54s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-962100 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-962100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-962100 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-962100 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c466647-19c8-4bd7-89da-2219f06ffc9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c466647-19c8-4bd7-89da-2219f06ffc9a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003107455s
addons_test.go:694: (dbg) Run:  kubectl --context addons-962100 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-962100 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-962100 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

TestAddons/StoppedEnableDisable (16.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-962100
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-962100: (16.497947914s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-962100
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-962100
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-962100
--- PASS: TestAddons/StoppedEnableDisable (16.78s)

TestCertOptions (29.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-501889 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-501889 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.182589614s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-501889 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-501889 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-501889 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-501889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-501889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-501889: (5.176789876s)
--- PASS: TestCertOptions (29.20s)
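
Note: the openssl step in this test dumps the full apiserver certificate and the assertion rides on its SANs. A quicker manual spot-check, as a sketch only (profile name and cert path taken from the run above), filters straight for the extension:

    minikube -p cert-options-501889 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

The extra values passed via --apiserver-ips and --apiserver-names should appear in that list.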

TestCertExpiration (215.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362724 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362724 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.919917353s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362724 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362724 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.472766724s)
helpers_test.go:175: Cleaning up "cert-expiration-362724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-362724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-362724: (2.459988367s)
--- PASS: TestCertExpiration (215.85s)

TestForceSystemdFlag (29.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-595035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-595035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.556777223s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-595035 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-595035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-595035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-595035: (2.56630338s)
--- PASS: TestForceSystemdFlag (29.47s)
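
Note: the assertion here is on the generated CRI-O drop-in. As a sketch (profile and path from the run above; cgroup_manager is the CRI-O setting that --force-systemd is expected to pin to "systemd"), the relevant line can be pulled out directly:

    minikube -p force-systemd-flag-595035 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"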

TestForceSystemdEnv (43.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-401542 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-401542 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.945937981s)
helpers_test.go:175: Cleaning up "force-systemd-env-401542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-401542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-401542: (4.085041858s)
--- PASS: TestForceSystemdEnv (43.03s)

TestErrorSpam/setup (21.97s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-819749 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-819749 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-819749 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-819749 --driver=docker  --container-runtime=crio: (21.965671498s)
--- PASS: TestErrorSpam/setup (21.97s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (6.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause: exit status 80 (2.216186947s)

-- stdout --
	* Pausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause: exit status 80 (2.191262883s)

-- stdout --
	* Pausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause: exit status 80 (2.209096742s)

-- stdout --
	* Pausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.62s)
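
Note: all three pause attempts fail identically: per the stderr above, minikube's pause path runs "sudo runc list -f json" inside the node and runc aborts because its state directory /run/runc is missing. The probe can be replayed by hand to confirm (sketch; profile name from this run):

    minikube -p nospam-819749 ssh "sudo runc list -f json"   # reproduces the error above
    minikube -p nospam-819749 ssh "sudo ls -ld /run/runc"    # checks whether the state dir exists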

TestErrorSpam/unpause (6.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause: exit status 80 (1.840070958s)

-- stdout --
	* Unpausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause: exit status 80 (2.175879071s)

-- stdout --
	* Unpausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause: exit status 80 (2.097575783s)

-- stdout --
	* Unpausing node nospam-819749 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T08:34:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.11s)

TestErrorSpam/stop (12.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 stop: (12.304085146s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819749 --log_dir /tmp/nospam-819749 stop
--- PASS: TestErrorSpam/stop (12.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/test/nested/copy/9243/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-683533 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.448840282s)
--- PASS: TestFunctional/serial/StartWithProxy (39.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.16s)

=== RUN   TestFunctional/serial/SoftStart
I1124 08:35:33.545258    9243 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-683533 --alsologtostderr -v=8: (7.154892137s)
functional_test.go:678: soft start took 7.155608041s for "functional-683533" cluster.
I1124 08:35:40.700518    9243 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.16s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-683533 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-683533 /tmp/TestFunctionalserialCacheCmdcacheadd_local244449939/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache add minikube-local-cache-test:functional-683533
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache delete minikube-local-cache-test:functional-683533
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-683533
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.810421ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 kubectl -- --context functional-683533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-683533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (66.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 08:35:58.276216    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.282613    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.293999    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.315368    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.356776    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.438202    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.599732    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:58.921433    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:35:59.563539    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:00.845124    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:03.406491    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:08.527998    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:18.769628    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:39.251479    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-683533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.566348113s)
functional_test.go:776: restart took 1m6.566485933s for "functional-683533" cluster.
I1124 08:36:53.104823    9243 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (66.57s)
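
Note: to confirm that --extra-config=apiserver.enable-admission-plugins actually landed on the running control plane, the static pod's command line can be inspected; a sketch (context name from this run, component=kube-apiserver being the standard kubeadm label):

    kubectl --context functional-683533 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins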

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-683533 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 logs: (1.183603374s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 logs --file /tmp/TestFunctionalserialLogsFileCmd1376717470/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 logs --file /tmp/TestFunctionalserialLogsFileCmd1376717470/001/logs.txt: (1.195213643s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-683533 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-683533
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-683533: exit status 115 (341.314972ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32163 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-683533 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)
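
Note: the SVC_UNREACHABLE exit above is minikube reporting that the service has no running backing pod. While the invalid service still exists, the same condition is visible directly from its (empty) endpoints; a sketch using the context and service name from this run:

    kubectl --context functional-683533 get endpoints invalid-svc
    kubectl --context functional-683533 describe svc invalid-svc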

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 config get cpus: exit status 14 (80.507814ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 config get cpus: exit status 14 (81.876226ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (4.74s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-683533 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-683533 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 49501: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.74s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.438218ms)

-- stdout --
	* [functional-683533] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1124 08:37:26.756046   48659 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:26.756324   48659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:26.756347   48659 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:26.756354   48659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:26.756570   48659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:37:26.757018   48659 out.go:368] Setting JSON to false
	I1124 08:37:26.758194   48659 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1193,"bootTime":1763972254,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:37:26.758253   48659 start.go:143] virtualization: kvm guest
	I1124 08:37:26.760039   48659 out.go:179] * [functional-683533] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:37:26.761291   48659 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:37:26.761286   48659 notify.go:221] Checking for updates...
	I1124 08:37:26.762374   48659 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:37:26.763942   48659 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:37:26.765106   48659 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:37:26.766257   48659 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:37:26.767372   48659 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:37:26.768968   48659 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:37:26.769787   48659 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:37:26.794089   48659 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:37:26.794168   48659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:37:26.847548   48659 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:37:26.838515571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:37:26.847640   48659 docker.go:319] overlay module found
	I1124 08:37:26.849298   48659 out.go:179] * Using the docker driver based on existing profile
	I1124 08:37:26.850426   48659 start.go:309] selected driver: docker
	I1124 08:37:26.850439   48659 start.go:927] validating driver "docker" against &{Name:functional-683533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-683533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:37:26.850533   48659 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:37:26.852266   48659 out.go:203] 
	W1124 08:37:26.853372   48659 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:37:26.854363   48659 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
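The dry-run pass above exercises minikube's resource validation: the 250MB request is rejected against the 1800MB usable minimum before any node is touched. For reference, a minimal Go sketch of driving the same check from a wrapper, using only the command line logged above; matching on the RSRC_INSUFFICIENT_REQ_MEMORY reason code rather than the human-readable message is an assumption about what stays stable across releases.

    // Sketch: re-run the logged dry-run and assert the memory-validation failure.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "functional-683533", "--dry-run", "--memory", "250MB",
            "--alsologtostderr", "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        if err == nil {
            fmt.Println("expected a validation failure, got success")
            return
        }
        // minikube prints the reason code on stderr; match the code, not the
        // localized message (which varies with the locale, see the next test).
        if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
            fmt.Println("memory validation rejected the 250MB request as expected")
        }
    }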

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-683533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.797624ms)
-- stdout --
	* [functional-683533] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1124 08:37:27.117391   48877 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:27.117656   48877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:27.117666   48877 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:27.117670   48877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:27.117940   48877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:37:27.118386   48877 out.go:368] Setting JSON to false
	I1124 08:37:27.119484   48877 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1193,"bootTime":1763972254,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:37:27.119536   48877 start.go:143] virtualization: kvm guest
	I1124 08:37:27.121083   48877 out.go:179] * [functional-683533] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:37:27.122144   48877 notify.go:221] Checking for updates...
	I1124 08:37:27.122172   48877 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:37:27.123241   48877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:37:27.124451   48877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:37:27.125606   48877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:37:27.126725   48877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:37:27.127798   48877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:37:27.129207   48877 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:37:27.129728   48877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:37:27.153642   48877 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:37:27.153733   48877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:37:27.213884   48877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:37:27.203315817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:37:27.213996   48877 docker.go:319] overlay module found
	I1124 08:37:27.215495   48877 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 08:37:27.216498   48877 start.go:309] selected driver: docker
	I1124 08:37:27.216511   48877 start.go:927] validating driver "docker" against &{Name:functional-683533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-683533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:37:27.216585   48877 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:37:27.218161   48877 out.go:203] 
	W1124 08:37:27.219302   48877 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:37:27.220753   48877 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
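The only difference from the DryRun case above is the localized message; the RSRC_INSUFFICIENT_REQ_MEMORY reason code stays English. A sketch of reproducing the French output, assuming the language is selected through the locale environment (the LC_ALL value here is hypothetical; the log does not show how the test sets it):

    // Sketch: same dry-run, but with a French locale in the environment.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "functional-683533", "--dry-run", "--memory", "250MB",
            "--driver=docker", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed locale selection
        out, _ := cmd.CombinedOutput()
        if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
            fmt.Println("got the French RSRC_INSUFFICIENT_REQ_MEMORY message")
        }
    }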

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
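The second status invocation above formats fields through a Go template; the same data is available as JSON via -o json. A sketch of consuming that JSON, assuming the top-level field names match the template names used on the command line (Host, Kubelet, APIServer, Kubeconfig):

    // Sketch: decode `minikube status -o json` into the fields the test formats.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type status struct {
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "functional-683533", "status", "-o", "json").Output()
        if err != nil {
            // status uses non-zero exit codes for stopped clusters, so an
            // error here does not necessarily make the output unusable
        }
        var st status
        if jsonErr := json.Unmarshal(out, &st); jsonErr == nil {
            fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
                st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
        }
    }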

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [189b9b95-41b4-4745-bb99-f631d2471010] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003338115s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-683533 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-683533 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-683533 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-683533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [53ed4443-b812-41c8-abe9-d0277e4ff173] Pending
helpers_test.go:352: "sp-pod" [53ed4443-b812-41c8-abe9-d0277e4ff173] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [53ed4443-b812-41c8-abe9-d0277e4ff173] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003574076s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-683533 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-683533 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-683533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0306ff79-c1e1-4c16-8207-218f0b039fcb] Pending
helpers_test.go:352: "sp-pod" [0306ff79-c1e1-4c16-8207-218f0b039fcb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0306ff79-c1e1-4c16-8207-218f0b039fcb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003051038s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-683533 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.47s)
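The sequence above is the actual persistence check: data written through the first sp-pod must still be on the claim after the pod is deleted and recreated. A condensed Go sketch of the same kubectl sequence (the waiting logic between steps is elided; context and manifest paths are as logged):

    // Sketch: write through one pod, recreate it, verify the file survived.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) ([]byte, error) {
        return exec.Command("kubectl",
            append([]string{"--context", "functional-683533"}, args...)...).CombinedOutput()
    }

    func main() {
        run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ... wait for sp-pod to be Running, as the test does ...
        run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ... wait again ...
        out, _ := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Printf("surviving contents: %s", out) // expect "foo" if the PVC persisted
    }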

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (1.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh -n functional-683533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cp functional-683533:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3494698104/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh -n functional-683533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh -n functional-683533 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

TestFunctional/parallel/MySQL (16.41s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-683533 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-2w68h" [3cb3c506-d7a5-47ee-9a5a-228cdf1da389] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-2w68h" [3cb3c506-d7a5-47ee-9a5a-228cdf1da389] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 11.003808871s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;": exit status 1 (99.393301ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1124 08:37:12.886788    9243 retry.go:31] will retry after 746.253605ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;": exit status 1 (88.599281ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1124 08:37:13.722218    9243 retry.go:31] will retry after 2.07895926s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;": exit status 1 (87.373747ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1124 08:37:15.889143    9243 retry.go:31] will retry after 1.945151497s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-683533 exec mysql-5bb876957f-2w68h -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.41s)
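The three failed attempts show mysqld still initializing inside the pod: first authentication errors, then a socket error, then success on the fourth try. A sketch of the retry-with-backoff pattern the log reflects; the delay schedule here is illustrative, not the jittered one retry.go computes:

    // Sketch: retry the logged query with growing delays until mysqld is up.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        delays := []time.Duration{750 * time.Millisecond, 2 * time.Second,
            2 * time.Second, 4 * time.Second}
        for i, d := range delays {
            out, err := exec.Command("kubectl", "--context", "functional-683533",
                "exec", "mysql-5bb876957f-2w68h", "--",
                "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
            if err == nil {
                fmt.Printf("attempt %d succeeded:\n%s", i+1, out)
                return
            }
            fmt.Printf("attempt %d failed, retrying after %v\n", i+1, d)
            time.Sleep(d)
        }
    }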

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9243/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /etc/test/nested/copy/9243/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /etc/ssl/certs/9243.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /usr/share/ca-certificates/9243.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /etc/ssl/certs/92432.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /usr/share/ca-certificates/92432.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)
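Each certificate is checked at its canonical path and at its OpenSSL subject-hash name (51391683.0 and 3ec20f2e.0 above), and every copy must be identical. A sketch of that equality check; reading the files locally stands in for the `ssh sudo cat` round-trips in the log:

    // Sketch: all synced copies of the test certificate must match byte-for-byte.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/etc/ssl/certs/9243.pem",
            "/usr/share/ca-certificates/9243.pem",
            "/etc/ssl/certs/51391683.0",
        }
        var first []byte
        for i, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                fmt.Println("missing:", p)
                return
            }
            if i == 0 {
                first = data
            } else if !bytes.Equal(first, data) {
                fmt.Println("content mismatch at", p)
                return
            }
        }
        fmt.Println("all synced copies match")
    }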

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-683533 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
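The go-template above ranges over the first node's label map and prints each key. The same template text works against any map[string]string with the standard library, as this runnable sketch shows (the sample labels are illustrative):

    // Sketch: the label-listing template from the kubectl call, run locally.
    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        labels := map[string]string{
            "kubernetes.io/hostname": "functional-683533",
            "kubernetes.io/os":       "linux",
        }
        tmpl := template.Must(template.New("labels").Parse(
            `{{range $k, $v := .}}{{$k}} {{end}}`))
        tmpl.Execute(os.Stdout, labels) // prints each label key followed by a space
    }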

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "sudo systemctl is-active docker": exit status 1 (318.920382ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "sudo systemctl is-active containerd": exit status 1 (307.426472ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
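Exit status 3 with stdout "inactive" is the expected result here: `systemctl is-active` exits 0 only when the unit is active, so the test passes precisely because the command fails. A sketch of the same check in Go, where the non-zero exit surfaces as an *exec.ExitError:

    // Sketch: verify the docker unit is present but not running on a crio node.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "docker").Output()
        state := strings.TrimSpace(string(out))
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && state == "inactive" {
            fmt.Println("docker unit inactive, as expected with crio as the runtime")
        } else if err == nil {
            fmt.Println("docker unit is active")
        }
    }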

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683533 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683533 image ls --format short --alsologtostderr:
I1124 08:37:29.244787   49943 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:29.244891   49943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:29.244901   49943 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:29.244907   49943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:29.245239   49943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:37:29.246055   49943 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:29.246199   49943 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:29.246799   49943 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
I1124 08:37:29.270625   49943 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:29.270686   49943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
I1124 08:37:29.291938   49943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
I1124 08:37:29.401692   49943 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683533 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683533 image ls --format table --alsologtostderr:
I1124 08:37:32.027785   50329 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:32.028118   50329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:32.028130   50329 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:32.028136   50329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:32.028447   50329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:37:32.029230   50329 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:32.029373   50329 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:32.029953   50329 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
I1124 08:37:32.049073   50329 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:32.049116   50329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
I1124 08:37:32.065921   50329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
I1124 08:37:32.165184   50329 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683533 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683533 image ls --format json --alsologtostderr:
I1124 08:37:32.252626   50393 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:32.252734   50393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:32.252739   50393 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:32.252743   50393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:32.252961   50393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:37:32.253521   50393 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:32.253608   50393 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:32.253993   50393 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
I1124 08:37:32.272259   50393 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:32.272352   50393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
I1124 08:37:32.289820   50393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
I1124 08:37:32.389981   50393 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
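The JSON stdout above is a top-level array of image records. A sketch of decoding it, with the struct fields (id, repoDigests, repoTags, size) taken directly from the logged output; note that size is emitted as a string:

    // Sketch: decode the `image ls --format json` output shown above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "functional-683533", "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
        }
    }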

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683533 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683533 image ls --format yaml --alsologtostderr:
I1124 08:37:29.520509   49996 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:29.520630   49996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:29.520640   49996 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:29.520647   49996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:29.520985   49996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:37:29.521645   49996 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:29.521745   49996 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:29.522252   49996 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
I1124 08:37:29.545970   49996 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:29.546030   49996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
I1124 08:37:29.567854   49996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
I1124 08:37:29.676969   49996 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh pgrep buildkitd: exit status 1 (318.344687ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image build -t localhost/my-image:functional-683533 testdata/build --alsologtostderr
2025/11/24 08:37:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 image build -t localhost/my-image:functional-683533 testdata/build --alsologtostderr: (3.193030322s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683533 image build -t localhost/my-image:functional-683533 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5ed72ddc671
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-683533
--> 96c70f4efc9
Successfully tagged localhost/my-image:functional-683533
96c70f4efc9cbb85aa2b155b5efb6d1a91a6e94cb83f9b9ba2f9febe943d99d5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683533 image build -t localhost/my-image:functional-683533 testdata/build --alsologtostderr:
I1124 08:37:30.099069   50155 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:30.099438   50155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:30.099451   50155 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:30.099457   50155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:30.099747   50155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:37:30.100415   50155 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:30.101001   50155 config.go:182] Loaded profile config "functional-683533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:37:30.101468   50155 cli_runner.go:164] Run: docker container inspect functional-683533 --format={{.State.Status}}
I1124 08:37:30.122945   50155 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:30.123009   50155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-683533
I1124 08:37:30.145034   50155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-683533/id_rsa Username:docker}
I1124 08:37:30.254685   50155 build_images.go:162] Building image from path: /tmp/build.1687497912.tar
I1124 08:37:30.254751   50155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:37:30.265778   50155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1687497912.tar
I1124 08:37:30.270503   50155 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1687497912.tar: stat -c "%s %y" /var/lib/minikube/build/build.1687497912.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1687497912.tar': No such file or directory
I1124 08:37:30.270540   50155 ssh_runner.go:362] scp /tmp/build.1687497912.tar --> /var/lib/minikube/build/build.1687497912.tar (3072 bytes)
I1124 08:37:30.295454   50155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1687497912
I1124 08:37:30.305810   50155 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1687497912 -xf /var/lib/minikube/build/build.1687497912.tar
I1124 08:37:30.315793   50155 crio.go:315] Building image: /var/lib/minikube/build/build.1687497912
I1124 08:37:30.315855   50155 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-683533 /var/lib/minikube/build/build.1687497912 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 08:37:33.206058   50155 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-683533 /var/lib/minikube/build/build.1687497912 --cgroup-manager=cgroupfs: (2.890178052s)
I1124 08:37:33.206109   50155 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1687497912
I1124 08:37:33.213956   50155 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1687497912.tar
I1124 08:37:33.221384   50155 build_images.go:218] Built localhost/my-image:functional-683533 from /tmp/build.1687497912.tar
I1124 08:37:33.221418   50155 build_images.go:134] succeeded building to: functional-683533
I1124 08:37:33.221423   50155 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls
E1124 08:38:42.134817    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:40:58.268698    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:41:25.976745    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:45:58.268502    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)
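
Note: with the crio runtime, image build delegates to podman on the node (the sudo podman build ... --cgroup-manager=cgroupfs line above). A minimal sketch of the same flow, assuming the same profile and the repo's testdata/build context:

	# build a local image inside the minikube node
	out/minikube-linux-amd64 -p functional-683533 image build -t localhost/my-image:functional-683533 testdata/build
	# confirm the new tag is visible to the runtime
	out/minikube-linux-amd64 -p functional-683533 image ls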

TestFunctional/parallel/ImageCommands/Setup (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-683533
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
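
Note: all three UpdateContextCmd variants run the same command; it rewrites the profile's kubeconfig entry to match the cluster's current address. A minimal sketch, assuming the functional-683533 profile:

	# repoint the kubeconfig entry at the profile's current API server
	out/minikube-linux-amd64 -p functional-683533 update-context
	# sanity-check which context is now active
	kubectl config current-context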

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image rm kicbase/echo-server:functional-683533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 image ls
I1124 08:37:07.636277    9243 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
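
Note: a minimal sketch of the remove-and-verify flow exercised above (debug flags dropped for brevity), assuming the same profile:

	# remove the tag from the node's image store, then confirm it is gone
	out/minikube-linux-amd64 -p functional-683533 image rm kicbase/echo-server:functional-683533
	out/minikube-linux-amd64 -p functional-683533 image ls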

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 45055: os: process already finished
helpers_test.go:519: unable to terminate pid 44855: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-683533 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fa1047dc-8713-4a4b-9126-5dec80d1fd5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fa1047dc-8713-4a4b-9126-5dec80d1fd5f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.017973072s
I1124 08:37:17.809651    9243 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.26s)
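
Note: this setup relies on the minikube tunnel process started by StartTunnel above; the tunnel is what lets a LoadBalancer service acquire an ingress IP. A minimal sketch of the same check, assuming the repo's testdata/testsvc.yaml:

	# create the LoadBalancer service, then poll for its ingress IP
	kubectl --context functional-683533 apply -f testdata/testsvc.yaml
	kubectl --context functional-683533 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'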

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-683533 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.25.98 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-683533 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
I1124 08:37:18.620741    9243 detect.go:223] nested VM detected
functional_test.go:1330: Took "363.918986ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.601951ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "373.778033ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.091907ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (5.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdany-port3253994994/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763973439061301812" to /tmp/TestFunctionalparallelMountCmdany-port3253994994/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763973439061301812" to /tmp/TestFunctionalparallelMountCmdany-port3253994994/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763973439061301812" to /tmp/TestFunctionalparallelMountCmdany-port3253994994/001/test-1763973439061301812
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.685499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 08:37:19.363370    9243 retry.go:31] will retry after 282.618476ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 test-1763973439061301812
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh cat /mount-9p/test-1763973439061301812
E1124 08:37:20.213504    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-683533 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5b97bac2-24ce-4d1f-a0c0-ed95cb848453] Pending
helpers_test.go:352: "busybox-mount" [5b97bac2-24ce-4d1f-a0c0-ed95cb848453] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5b97bac2-24ce-4d1f-a0c0-ed95cb848453] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5b97bac2-24ce-4d1f-a0c0-ed95cb848453] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003234927s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-683533 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdany-port3253994994/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.51s)
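
Note: a minimal sketch of the 9p mount flow above; /tmp/some-host-dir is a hypothetical host path standing in for the test's temp directory:

	# mount a host directory into the node over 9p (run in the background)
	out/minikube-linux-amd64 mount -p functional-683533 /tmp/some-host-dir:/mount-9p &
	# verify the mount is visible from inside the node
	out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p"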

TestFunctional/parallel/MountCmd/specific-port (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdspecific-port3382730610/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.107599ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 08:37:24.859684    9243 retry.go:31] will retry after 271.244072ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdspecific-port3382730610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "sudo umount -f /mount-9p": exit status 1 (266.5512ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-683533 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdspecific-port3382730610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T" /mount1: exit status 1 (349.925041ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 08:37:26.496013    9243 retry.go:31] will retry after 671.97975ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-683533 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2743143260/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)
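
Note: the cleanup step above uses the mount command's kill mode; a one-line sketch, assuming the same profile:

	# terminate every outstanding minikube mount process for the profile
	out/minikube-linux-amd64 mount -p functional-683533 --kill=true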

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 service list: (1.702517153s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-683533 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-683533 service list -o json: (1.703405723s)
functional_test.go:1504: Took "1.703488316s" to run "out/minikube-linux-amd64 -p functional-683533 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
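
Note: the JSON listing carries the same data as the plain service list, just machine-readable; the ~1.7s the timing assertion measures is dominated by querying the cluster. A minimal sketch:

	# human-readable and JSON service listings
	out/minikube-linux-amd64 -p functional-683533 service list
	out/minikube-linux-amd64 -p functional-683533 service list -o json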

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-683533
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-683533
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-683533
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-5690/.minikube/files/etc/test/nested/copy/9243/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (44.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-504554 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.032005426s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (44.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1124 08:48:09.452044    9243 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-504554 --alsologtostderr -v=8: (6.80576762s)
functional_test.go:678: soft start took 6.806101226s for "functional-504554" cluster.
I1124 08:48:16.258157    9243 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.81s)
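
Note: a soft start is simply start run against a profile that is already up; it should reuse the existing node (here ~6.8s) instead of reprovisioning it. A minimal sketch:

	# re-run start on a live profile; no rebuild should occur
	out/minikube-linux-amd64 start -p functional-504554 --alsologtostderr -v=8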

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-504554 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3601200818/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache add minikube-local-cache-test:functional-504554
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache delete minikube-local-cache-test:functional-504554
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-504554
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.200902ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.58s)
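
Note: the reload cycle above is the core of the cache feature: the image is cached on the host, deleted from the node, and restored without touching the registry. A minimal sketch, assuming the functional-504554 profile:

	# cache an image on the host
	out/minikube-linux-amd64 -p functional-504554 cache add registry.k8s.io/pause:latest
	# delete it from the node; inspecti now exits non-zero
	out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# restore it from the host cache and re-check
	out/minikube-linux-amd64 -p functional-504554 cache reload
	out/minikube-linux-amd64 -p functional-504554 ssh sudo crictl inspecti registry.k8s.io/pause:latest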

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 kubectl -- --context functional-504554 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)
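
Note: everything after the -- is passed straight through to the kubectl binary minikube bundles. A minimal sketch:

	# run the bundled kubectl against the profile's context
	out/minikube-linux-amd64 -p functional-504554 kubectl -- --context functional-504554 get pods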

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-504554 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-504554 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.026791138s)
functional_test.go:776: restart took 40.026899108s for "functional-504554" cluster.
I1124 08:49:02.088529    9243 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.03s)
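
Note: --extra-config takes component.key=value pairs; here it enables an extra admission plugin on the apiserver, and --wait=all blocks until every component reports ready. A minimal sketch:

	# restart the cluster with an extra apiserver admission plugin
	out/minikube-linux-amd64 start -p functional-504554 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all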

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-504554 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
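
Note: the health check reads the control-plane pods straight from the cluster; the phase/status lines above are parsed out of this JSON. A minimal sketch:

	# dump control-plane pod objects for inspection
	kubectl --context functional-504554 get po -l tier=control-plane -n kube-system -o=json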

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 logs: (1.188822191s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2227687899/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2227687899/001/logs.txt: (1.218365153s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-504554 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-504554
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-504554: exit status 115 (340.94767ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30252 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-504554 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 config get cpus: exit status 14 (69.591801ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 config get cpus: exit status 14 (70.741881ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)
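
Note: config get on an unset key exits with status 14, which is exactly what both non-zero exits above assert. A minimal sketch of the set/get/unset cycle:

	out/minikube-linux-amd64 -p functional-504554 config set cpus 2
	out/minikube-linux-amd64 -p functional-504554 config get cpus      # prints 2
	out/minikube-linux-amd64 -p functional-504554 config unset cpus
	out/minikube-linux-amd64 -p functional-504554 config get cpus      # exit status 14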

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-504554 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-504554 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 66389: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-504554 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (186.203765ms)

-- stdout --
	* [functional-504554] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1124 08:49:09.478580   65615 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:49:09.478674   65615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.478682   65615 out.go:374] Setting ErrFile to fd 2...
	I1124 08:49:09.478686   65615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.478879   65615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:49:09.479266   65615 out.go:368] Setting JSON to false
	I1124 08:49:09.480118   65615 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1895,"bootTime":1763972254,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:49:09.480175   65615 start.go:143] virtualization: kvm guest
	I1124 08:49:09.482267   65615 out.go:179] * [functional-504554] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:49:09.483502   65615 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:49:09.483498   65615 notify.go:221] Checking for updates...
	I1124 08:49:09.486154   65615 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:49:09.487585   65615 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:49:09.491852   65615 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:49:09.493106   65615 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:49:09.494499   65615 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:49:09.496063   65615 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:49:09.496621   65615 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:49:09.524806   65615 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:49:09.524953   65615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:49:09.590650   65615 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:49:09.579905346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:49:09.590753   65615 docker.go:319] overlay module found
	I1124 08:49:09.593149   65615 out.go:179] * Using the docker driver based on existing profile
	I1124 08:49:09.594304   65615 start.go:309] selected driver: docker
	I1124 08:49:09.594358   65615 start.go:927] validating driver "docker" against &{Name:functional-504554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-504554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:49:09.594435   65615 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:49:09.596148   65615 out.go:203] 
	W1124 08:49:09.597918   65615 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:49:09.599140   65615 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)
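Note: the DryRun block shows minikube's config validation running end to end without creating anything. The first invocation requests 250MB, which the memory validator rejects with RSRC_INSUFFICIENT_REQ_MEMORY; the second (functional_test.go:1006) omits --memory and succeeds. A minimal sketch of reproducing just the failing check against the same existing profile:

    # --dry-run validates the request and exits without touching the cluster;
    # 250MB is below minikube's 1800MB usable minimum, so this fails.
    out/minikube-linux-amd64 start -p functional-504554 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio
    echo "exit=$?"   # expect 23, the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen below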

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504554 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-504554 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (182.021663ms)

                                                
                                                
-- stdout --
	* [functional-504554] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:49:09.301850   65503 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:49:09.302168   65503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.302178   65503 out.go:374] Setting ErrFile to fd 2...
	I1124 08:49:09.302182   65503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:49:09.302521   65503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 08:49:09.302944   65503 out.go:368] Setting JSON to false
	I1124 08:49:09.303949   65503 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1895,"bootTime":1763972254,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:49:09.304025   65503 start.go:143] virtualization: kvm guest
	I1124 08:49:09.305953   65503 out.go:179] * [functional-504554] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:49:09.307580   65503 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:49:09.307649   65503 notify.go:221] Checking for updates...
	I1124 08:49:09.309878   65503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:49:09.311425   65503 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 08:49:09.312672   65503 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 08:49:09.313751   65503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:49:09.314988   65503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:49:09.316726   65503 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:49:09.317166   65503 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:49:09.344385   65503 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:49:09.344523   65503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:49:09.403317   65503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:49:09.393132227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:49:09.403448   65503 docker.go:319] overlay module found
	I1124 08:49:09.405107   65503 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 08:49:09.406289   65503 start.go:309] selected driver: docker
	I1124 08:49:09.406302   65503 start.go:927] validating driver "docker" against &{Name:functional-504554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-504554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:49:09.406419   65503 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:49:09.408273   65503 out.go:203] 
	W1124 08:49:09.409404   65503 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:49:09.410536   65503 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)
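Note for readers without French: the localized stdout header above is "minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)", "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line is the same error as in the DryRun block: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A sketch of forcing the localized output by hand, assuming minikube selects its message catalog from the standard locale variables (the harness sets an equivalent environment):

    # Same failing dry-run as above, but with a French locale requested.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-504554 \
      --dry-run --memory 250MB --driver=docker --container-runtime=crio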

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.01s)
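Note: the three invocations above exercise the plain, Go-template, and JSON output modes of status. A compact equivalent, with jq assumed available for the JSON case:

    out/minikube-linux-amd64 -p functional-504554 status
    out/minikube-linux-amd64 -p functional-504554 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-504554 status -o json | jq -r .Host   # "Running" when healthy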

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (23.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ef6d0bfb-011d-45f6-8b68-4bbd75f55251] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003347398s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-504554 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-504554 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-504554 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-504554 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8615c1ae-7104-4cb2-ac63-b67c992da9de] Pending
helpers_test.go:352: "sp-pod" [8615c1ae-7104-4cb2-ac63-b67c992da9de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8615c1ae-7104-4cb2-ac63-b67c992da9de] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00315621s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-504554 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-504554 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-504554 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:49:26.241843    9243 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [923b8cd8-526b-4fb5-b644-f83487a9b9f5] Pending
helpers_test.go:352: "sp-pod" [923b8cd8-526b-4fb5-b644-f83487a9b9f5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004291135s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-504554 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (23.30s)
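Note: the sequence above is a persistence round-trip: provision a PVC, write /tmp/mount/foo from the first sp-pod, delete the pod, schedule a fresh one against the same claim, and confirm the file survived. The same steps by hand, reusing the test's own manifests (pod name sp-pod comes from testdata/storage-provisioner/pod.yaml):

    kubectl --context functional-504554 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-504554 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-504554 wait --for=condition=Ready pod/sp-pod --timeout=6m0s
    kubectl --context functional-504554 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-504554 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-504554 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-504554 wait --for=condition=Ready pod/sp-pod --timeout=6m0s
    kubectl --context functional-504554 exec sp-pod -- ls /tmp/mount   # foo persists across pods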

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh -n functional-504554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cp functional-504554:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp270730191/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh -n functional-504554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh -n functional-504554 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.63s)
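Note: cp works in both directions, as the runs above exercise: a host path to a node path, a node path (profile:path) back to the host, and a destination directory that does not yet exist on the node. For example:

    out/minikube-linux-amd64 -p functional-504554 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-504554 cp functional-504554:/home/docker/cp-test.txt ./cp-test.txt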

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (14.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-504554 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-n54ch" [957b51a9-3b1f-4b21-8654-58c4f4288467] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-n54ch" [957b51a9-3b1f-4b21-8654-58c4f4288467] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 11.008514364s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- mysql -ppassword -e "show databases;": exit status 1 (131.817987ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:49:39.291295    9243 retry.go:31] will retry after 1.073567604s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- mysql -ppassword -e "show databases;": exit status 1 (85.623636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:49:40.451284    9243 retry.go:31] will retry after 1.97300275s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (14.52s)
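Note: both transient failures above are normal while mysqld initializes. The pod reports Running before the server accepts root connections, so the harness retries with backoff: first ERROR 1045 (typically while the image's entrypoint is still bootstrapping), then ERROR 2002 until the socket exists, then success. A hand-rolled equivalent of that loop (pod name taken from this run; yours will differ):

    # Retry "show databases;" until mysqld finishes initializing.
    for i in $(seq 1 10); do
      kubectl --context functional-504554 exec mysql-844cf969f6-n54ch -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done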

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9243/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /etc/test/nested/copy/9243/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /etc/ssl/certs/9243.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /usr/share/ca-certificates/9243.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /etc/ssl/certs/92432.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /usr/share/ca-certificates/92432.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.90s)
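Note: CertSync checks each synced certificate under three paths: the literal name in /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named copy (51391683.0 and 3ec20f2e.0 here). Those .0 names follow the usual OpenSSL subject-hash convention, which you can verify inside the node (assuming openssl is present in the base image, as it is in kicbase):

    out/minikube-linux-amd64 -p functional-504554 ssh \
      "openssl x509 -noout -subject_hash -in /etc/ssl/certs/9243.pem"
    # should print 51391683, matching the /etc/ssl/certs/51391683.0 checked above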

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-504554 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "sudo systemctl is-active docker": exit status 1 (272.352104ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "sudo systemctl is-active containerd": exit status 1 (275.78476ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
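Note: the "exit status 1" results here are the expected outcome. With crio as the active runtime, systemctl is-active prints "inactive" for docker and containerd and exits with status 3 (its code for a not-running unit); the ssh session propagates that non-zero status ("ssh: Process exited with status 3"), and minikube in turn exits non-zero. To see the remote status directly:

    # Single quotes so $? expands on the node, not locally.
    out/minikube-linux-amd64 -p functional-504554 ssh \
      'sudo systemctl is-active docker; echo remote-status=$?'   # inactive / remote-status=3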

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "351.524885ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.038498ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
I1124 08:49:15.437746    9243 detect.go:223] nested VM detected
functional_test.go:1381: Took "346.123693ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.506812ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
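Note: profile list supports JSON output plus a --light variant that skips cluster health probes, which is why the light runs above come back in ~60ms versus ~350ms. A sketch for scripting against it, assuming the usual valid/invalid JSON shape and jq available:

    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'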

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1617234062/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763974155770507340" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1617234062/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763974155770507340" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1617234062/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763974155770507340" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1617234062/001/test-1763974155770507340
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.083938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:49:16.056902    9243 retry.go:31] will retry after 268.14765ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:49 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:49 test-1763974155770507340
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh cat /mount-9p/test-1763974155770507340
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-504554 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [61df013a-719b-4a2f-ac21-9ea20a5a56e7] Pending
helpers_test.go:352: "busybox-mount" [61df013a-719b-4a2f-ac21-9ea20a5a56e7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [61df013a-719b-4a2f-ac21-9ea20a5a56e7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [61df013a-719b-4a2f-ac21-9ea20a5a56e7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002736623s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-504554 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1617234062/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.71s)
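Note: the mount flow above starts a background 9p server on the host, verifies it inside the node with findmnt (the first probe races the mount coming up, hence the retry), exercises the share from the busybox-mount pod, then force-unmounts and stops the daemon. By hand, with a hypothetical host directory:

    out/minikube-linux-amd64 mount -p functional-504554 /tmp/hostdir:/mount-9p &   # /tmp/hostdir is illustrative
    sleep 2
    out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-504554 ssh "sudo umount -f /mount-9p"
    kill %1   # stop the background mount process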

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3295634454/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.625377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:49:22.766459    9243 retry.go:31] will retry after 393.431578ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3295634454/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "sudo umount -f /mount-9p": exit status 1 (277.086786ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-504554 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3295634454/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T" /mount1: exit status 1 (341.150394ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:49:24.537088    9243 retry.go:31] will retry after 624.645707ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-504554 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504554 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1499523100/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.87s)
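Note: with three mounts running, --kill=true is the cleanup path exercised above: it terminates every mount process belonging to the profile in one shot, after which the per-mount stop attempts find nothing left ("unable to find parent, assuming dead"):

    out/minikube-linux-amd64 mount -p functional-504554 --kill=true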

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 72075: os: process already finished
helpers_test.go:519: unable to terminate pid 71890: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-504554 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [1571724e-9800-4564-bb17-195eeaeb9a5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [1571724e-9800-4564-bb17-195eeaeb9a5f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003538476s
I1124 08:49:45.195314    9243 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)
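Note: minikube tunnel runs as a long-lived daemon that gives LoadBalancer services a routable address; the Setup step above only deploys nginx-svc and waits for its pod. The usual manual flow, with the tunnel backgrounded:

    out/minikube-linux-amd64 -p functional-504554 tunnel &
    kubectl --context functional-504554 get svc nginx-svc -w   # watch EXTERNAL-IP populate
    kill %1   # tear the tunnel down when done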

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504554 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/etcd:3.5.24-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504554 image ls --format short --alsologtostderr:
I1124 08:49:47.062605   73888 out.go:360] Setting OutFile to fd 1 ...
I1124 08:49:47.062950   73888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.062960   73888 out.go:374] Setting ErrFile to fd 2...
I1124 08:49:47.062967   73888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.063181   73888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:49:47.063710   73888 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.063824   73888 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.064260   73888 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
I1124 08:49:47.082192   73888 ssh_runner.go:195] Run: systemctl --version
I1124 08:49:47.082242   73888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
I1124 08:49:47.100102   73888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
I1124 08:49:47.202526   73888 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)
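Note: as the stderr above shows, with the crio runtime the listing is ultimately "sudo crictl images --output json" on the node, reformatted client-side. image ls also accepts table, json, and yaml in place of short:

    out/minikube-linux-amd64 -p functional-504554 image ls --format json
    # or query the runtime directly:
    out/minikube-linux-amd64 -p functional-504554 ssh "sudo crictl images"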

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504554 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.5.24-0           │ 8cb12dd0c3e42 │ 66.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504554 image ls --format table --alsologtostderr:
I1124 08:49:47.538248   74147 out.go:360] Setting OutFile to fd 1 ...
I1124 08:49:47.538523   74147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.538534   74147 out.go:374] Setting ErrFile to fd 2...
I1124 08:49:47.538537   74147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.538766   74147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:49:47.539311   74147 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.539433   74147 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.539829   74147 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
I1124 08:49:47.559522   74147 ssh_runner.go:195] Run: systemctl --version
I1124 08:49:47.559576   74147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
I1124 08:49:47.579383   74147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
I1124 08:49:47.680879   74147 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504554 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d","repoDigests":["registry.k8s.io/etcd@sha256:2935cfa4bfce2fda1de6c218e1716ad170a9af6140906390d62cc3c2f4f542cd"],"repoTags":["registry.k8s.io/etcd:3.5.24-0"],"size":"66163668"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha25
6:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98
765c"],"repoTags":[],"size":"43824855"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"aa9d028
39d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","do
cker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efd
ece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504554 image ls --format json --alsologtostderr:
I1124 08:49:47.303149   73995 out.go:360] Setting OutFile to fd 1 ...
I1124 08:49:47.303266   73995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.303274   73995 out.go:374] Setting ErrFile to fd 2...
I1124 08:49:47.303280   73995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.303542   73995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:49:47.304300   73995 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.304460   73995 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.305034   73995 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
I1124 08:49:47.323972   73995 ssh_runner.go:195] Run: systemctl --version
I1124 08:49:47.324020   73995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
I1124 08:49:47.343347   73995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
I1124 08:49:47.448896   73995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
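
The JSON format above is a flat array of objects with id, repoDigests, repoTags, and size (bytes, encoded as a string). A minimal sketch of decoding it, using a hand-written struct rather than any type from minikube itself, with one entry from the output above as sample data:

// Hedged sketch: decode the `image ls --format json` payload shown
// above. Field names follow the observed output; the struct is an
// illustration, not minikube's own type.
package main

import (
	"encoding/json"
	"fmt"
)

type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// One entry copied from the stdout above.
	raw := []byte(`[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
		"repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],
		"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]`)
	var images []imageInfo
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID[:12], img.RepoTags, img.Size)
	}
}
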
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504554 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d
repoDigests:
- registry.k8s.io/etcd@sha256:2935cfa4bfce2fda1de6c218e1716ad170a9af6140906390d62cc3c2f4f542cd
repoTags:
- registry.k8s.io/etcd:3.5.24-0
size: "66163668"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504554 image ls --format yaml --alsologtostderr:
I1124 08:49:47.059751   73889 out.go:360] Setting OutFile to fd 1 ...
I1124 08:49:47.059863   73889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.059870   73889 out.go:374] Setting ErrFile to fd 2...
I1124 08:49:47.059875   73889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.060548   73889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:49:47.061166   73889 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.061260   73889 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.061719   73889 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
I1124 08:49:47.080672   73889 ssh_runner.go:195] Run: systemctl --version
I1124 08:49:47.080738   73889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
I1124 08:49:47.098807   73889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
I1124 08:49:47.202516   73889 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.66s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504554 ssh pgrep buildkitd: exit status 1 (282.838052ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image build -t localhost/my-image:functional-504554 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 image build -t localhost/my-image:functional-504554 testdata/build --alsologtostderr: (2.148724274s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504554 image build -t localhost/my-image:functional-504554 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9f086cb98b4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-504554
--> e2cfaeed1e1
Successfully tagged localhost/my-image:functional-504554
e2cfaeed1e1a668f1eebb997fcbb124720369d24d5b1e71722182af6c2cc7fe5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504554 image build -t localhost/my-image:functional-504554 testdata/build --alsologtostderr:
I1124 08:49:47.588927   74159 out.go:360] Setting OutFile to fd 1 ...
I1124 08:49:47.589175   74159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.589185   74159 out.go:374] Setting ErrFile to fd 2...
I1124 08:49:47.589189   74159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:49:47.589445   74159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
I1124 08:49:47.589963   74159 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.590662   74159 config.go:182] Loaded profile config "functional-504554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:49:47.591105   74159 cli_runner.go:164] Run: docker container inspect functional-504554 --format={{.State.Status}}
I1124 08:49:47.611479   74159 ssh_runner.go:195] Run: systemctl --version
I1124 08:49:47.611530   74159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-504554
I1124 08:49:47.629242   74159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/functional-504554/id_rsa Username:docker}
I1124 08:49:47.732125   74159 build_images.go:162] Building image from path: /tmp/build.4294583083.tar
I1124 08:49:47.732182   74159 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:49:47.739968   74159 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4294583083.tar
I1124 08:49:47.743533   74159 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4294583083.tar: stat -c "%s %y" /var/lib/minikube/build/build.4294583083.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4294583083.tar': No such file or directory
I1124 08:49:47.743567   74159 ssh_runner.go:362] scp /tmp/build.4294583083.tar --> /var/lib/minikube/build/build.4294583083.tar (3072 bytes)
I1124 08:49:47.760666   74159 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4294583083
I1124 08:49:47.768127   74159 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4294583083 -xf /var/lib/minikube/build/build.4294583083.tar
I1124 08:49:47.775688   74159 crio.go:315] Building image: /var/lib/minikube/build/build.4294583083
I1124 08:49:47.775767   74159 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-504554 /var/lib/minikube/build/build.4294583083 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 08:49:49.647744   74159 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-504554 /var/lib/minikube/build/build.4294583083 --cgroup-manager=cgroupfs: (1.871945711s)
I1124 08:49:49.647809   74159 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4294583083
I1124 08:49:49.656550   74159 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4294583083.tar
I1124 08:49:49.664014   74159 build_images.go:218] Built localhost/my-image:functional-504554 from /tmp/build.4294583083.tar
I1124 08:49:49.664046   74159 build_images.go:134] succeeded building to: functional-504554
I1124 08:49:49.664063   74159 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls
E1124 08:50:58.268640    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.595564    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.601915    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.613262    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.634595    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.676639    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.758063    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:59.919572    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:00.241321    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:00.882765    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:02.164368    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:04.726034    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:09.848037    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:20.089775    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:21.338730    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:52:40.571874    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:53:21.534093    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:54:43.456402    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:55:58.268644    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:56:59.595422    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:57:27.297978    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.66s)
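
The stderr trace above shows the crio build path: the context directory is packed into a tar, copied to /var/lib/minikube/build on the node, extracted, and built there with "sudo podman build ... --cgroup-manager=cgroupfs". A rough end-to-end sketch of the user-facing flow; the Dockerfile is reconstructed from the three STEP lines in the stdout above, and the content.txt payload is a placeholder:

// Hedged sketch: build a context directory and hand it to
// `minikube image build`, which performs the tar/scp/podman sequence
// visible in the stderr trace. Binary path, profile, and tag are the
// ones from this run; file contents are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Mirrors the three steps in the log: FROM, RUN true, ADD content.txt /.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-504554",
		"image", "build", "-t", "localhost/my-image:functional-504554", dir).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("build failed: %v\n", err)
	}
}
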
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-504554
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.33s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-504554 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.129.255 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
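
The AccessDirect check amounts to reaching the LoadBalancer ingress IP from the host: with "minikube tunnel" running, the address assigned to nginx-svc (10.103.129.255 in this run) becomes routable. A minimal sketch of such a probe, assuming the tunnel is still up and using the IP from this run:

// Hedged sketch: probe the tunneled LoadBalancer IP over plain HTTP.
// The address is the one reported in the log above and only exists
// while the tunnel process is running.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.103.129.255")
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
}
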
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-504554 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image rm kicbase/echo-server:functional-504554 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 service list: (1.723584981s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.72s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-504554 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-504554 service list -o json: (1.703493858s)
functional_test.go:1504: Took "1.703609575s" to run "out/minikube-linux-amd64 -p functional-504554 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.70s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-504554
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-504554
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-504554
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
x
+
TestMultiControlPlane/serial/StartCluster (157.72s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 09:00:58.268836    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m36.996706053s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (157.72s)
x
+
TestMultiControlPlane/serial/DeployApp (4.41s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- rollout status deployment/busybox
E1124 09:01:59.596494    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 kubectl -- rollout status deployment/busybox: (2.492983037s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-jlbn4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-q5mnb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-xtlvq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-jlbn4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-q5mnb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-xtlvq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-jlbn4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-q5mnb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-xtlvq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.41s)
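
The assertions above fan out over every busybox replica and resolve three names through the cluster DNS. A condensed sketch of the same loop; pod names are copied from this run (in practice they come from "kubectl get pods"), and plain kubectl with the ha-643183 context stands in for "minikube kubectl --":

// Hedged sketch: verify that each busybox pod can resolve the three
// names the test checks. Pod names are the ones observed in this run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-jlbn4", "busybox-7b57f96db7-q5mnb", "busybox-7b57f96db7-xtlvq"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-643183",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}
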
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.03s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-jlbn4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-jlbn4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-q5mnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-q5mnb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-xtlvq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 kubectl -- exec busybox-7b57f96db7-xtlvq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
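
The shell pipeline here is worth unpacking: busybox nslookup prints its answer as "Address 1: <ip> <name>", so awk 'NR==5' selects the fifth output line and cut -d' ' -f3 extracts the host IP (192.168.49.1, the gateway of the default docker network), which the pod then pings. A small sketch of that extraction; the nslookup output shape varies between busybox versions, so the sample literal is illustrative only:

// Hedged sketch: replicate `awk 'NR==5' | cut -d' ' -f3` on assumed
// busybox nslookup output for host.minikube.internal.
package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal`

	lines := strings.Split(sample, "\n")
	if len(lines) >= 5 {
		fields := strings.Fields(lines[4]) // awk 'NR==5'
		if len(fields) >= 3 {
			fmt.Println(fields[2]) // cut -d' ' -f3 -> 192.168.49.1
		}
	}
}
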
x
+
TestMultiControlPlane/serial/AddWorkerNode (23.85s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 node add --alsologtostderr -v 5: (22.955371474s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.85s)
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-643183 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)
x
+
TestMultiControlPlane/serial/CopyFile (17.47s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp testdata/cp-test.txt ha-643183:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2715463147/001/cp-test_ha-643183.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183:/home/docker/cp-test.txt ha-643183-m02:/home/docker/cp-test_ha-643183_ha-643183-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test_ha-643183_ha-643183-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183:/home/docker/cp-test.txt ha-643183-m03:/home/docker/cp-test_ha-643183_ha-643183-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test_ha-643183_ha-643183-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183:/home/docker/cp-test.txt ha-643183-m04:/home/docker/cp-test_ha-643183_ha-643183-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test_ha-643183_ha-643183-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp testdata/cp-test.txt ha-643183-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2715463147/001/cp-test_ha-643183-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m02:/home/docker/cp-test.txt ha-643183:/home/docker/cp-test_ha-643183-m02_ha-643183.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test_ha-643183-m02_ha-643183.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m02:/home/docker/cp-test.txt ha-643183-m03:/home/docker/cp-test_ha-643183-m02_ha-643183-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test_ha-643183-m02_ha-643183-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m02:/home/docker/cp-test.txt ha-643183-m04:/home/docker/cp-test_ha-643183-m02_ha-643183-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test_ha-643183-m02_ha-643183-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp testdata/cp-test.txt ha-643183-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2715463147/001/cp-test_ha-643183-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m03:/home/docker/cp-test.txt ha-643183:/home/docker/cp-test_ha-643183-m03_ha-643183.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test_ha-643183-m03_ha-643183.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m03:/home/docker/cp-test.txt ha-643183-m02:/home/docker/cp-test_ha-643183-m03_ha-643183-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test_ha-643183-m03_ha-643183-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m03:/home/docker/cp-test.txt ha-643183-m04:/home/docker/cp-test_ha-643183-m03_ha-643183-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test_ha-643183-m03_ha-643183-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp testdata/cp-test.txt ha-643183-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2715463147/001/cp-test_ha-643183-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m04:/home/docker/cp-test.txt ha-643183:/home/docker/cp-test_ha-643183-m04_ha-643183.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183 "sudo cat /home/docker/cp-test_ha-643183-m04_ha-643183.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m04:/home/docker/cp-test.txt ha-643183-m02:/home/docker/cp-test_ha-643183-m04_ha-643183-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m02 "sudo cat /home/docker/cp-test_ha-643183-m04_ha-643183-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 cp ha-643183-m04:/home/docker/cp-test.txt ha-643183-m03:/home/docker/cp-test_ha-643183-m04_ha-643183-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 ssh -n ha-643183-m03 "sudo cat /home/docker/cp-test_ha-643183-m04_ha-643183-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.47s)
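
Every leg of the copy matrix above follows the same round trip: "minikube cp" places the file, then "minikube ssh ... sudo cat" reads it back for comparison. One leg as a sketch, with the binary path, profile, and paths from this run:

// Hedged sketch: cp a file from the host into a node, cat it back over
// ssh, and compare with the local copy.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	if out, err := exec.Command(bin, "-p", "ha-643183", "cp",
		"testdata/cp-test.txt", "ha-643183:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	remote, err := exec.Command(bin, "-p", "ha-643183", "ssh", "-n", "ha-643183",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
}
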
x
+
TestMultiControlPlane/serial/StopSecondaryNode (19.78s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 node stop m02 --alsologtostderr -v 5: (19.087016395s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5: exit status 7 (694.64526ms)
-- stdout --
	ha-643183
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-643183-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643183-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-643183-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1124 09:03:06.222773   98606 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:03:06.223084   98606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:03:06.223094   98606 out.go:374] Setting ErrFile to fd 2...
	I1124 09:03:06.223099   98606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:03:06.223295   98606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:03:06.223515   98606 out.go:368] Setting JSON to false
	I1124 09:03:06.223539   98606 mustload.go:66] Loading cluster: ha-643183
	I1124 09:03:06.223668   98606 notify.go:221] Checking for updates...
	I1124 09:03:06.223867   98606 config.go:182] Loaded profile config "ha-643183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:03:06.223883   98606 status.go:174] checking status of ha-643183 ...
	I1124 09:03:06.224398   98606 cli_runner.go:164] Run: docker container inspect ha-643183 --format={{.State.Status}}
	I1124 09:03:06.243035   98606 status.go:371] ha-643183 host status = "Running" (err=<nil>)
	I1124 09:03:06.243057   98606 host.go:66] Checking if "ha-643183" exists ...
	I1124 09:03:06.243315   98606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-643183
	I1124 09:03:06.261387   98606 host.go:66] Checking if "ha-643183" exists ...
	I1124 09:03:06.261602   98606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:03:06.261644   98606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-643183
	I1124 09:03:06.278379   98606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/ha-643183/id_rsa Username:docker}
	I1124 09:03:06.377658   98606 ssh_runner.go:195] Run: systemctl --version
	I1124 09:03:06.383886   98606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:03:06.396007   98606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:03:06.453108   98606 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 09:03:06.443625678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:03:06.453858   98606 kubeconfig.go:125] found "ha-643183" server: "https://192.168.49.254:8443"
	I1124 09:03:06.453898   98606 api_server.go:166] Checking apiserver status ...
	I1124 09:03:06.453948   98606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:03:06.465106   98606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	W1124 09:03:06.473365   98606 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:03:06.473404   98606 ssh_runner.go:195] Run: ls
	I1124 09:03:06.477000   98606 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 09:03:06.481000   98606 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 09:03:06.481023   98606 status.go:463] ha-643183 apiserver status = Running (err=<nil>)
	I1124 09:03:06.481034   98606 status.go:176] ha-643183 status: &{Name:ha-643183 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:03:06.481060   98606 status.go:174] checking status of ha-643183-m02 ...
	I1124 09:03:06.481491   98606 cli_runner.go:164] Run: docker container inspect ha-643183-m02 --format={{.State.Status}}
	I1124 09:03:06.500474   98606 status.go:371] ha-643183-m02 host status = "Stopped" (err=<nil>)
	I1124 09:03:06.500496   98606 status.go:384] host is not running, skipping remaining checks
	I1124 09:03:06.500504   98606 status.go:176] ha-643183-m02 status: &{Name:ha-643183-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:03:06.500544   98606 status.go:174] checking status of ha-643183-m03 ...
	I1124 09:03:06.500786   98606 cli_runner.go:164] Run: docker container inspect ha-643183-m03 --format={{.State.Status}}
	I1124 09:03:06.517982   98606 status.go:371] ha-643183-m03 host status = "Running" (err=<nil>)
	I1124 09:03:06.518006   98606 host.go:66] Checking if "ha-643183-m03" exists ...
	I1124 09:03:06.518303   98606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-643183-m03
	I1124 09:03:06.536941   98606 host.go:66] Checking if "ha-643183-m03" exists ...
	I1124 09:03:06.537195   98606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:03:06.537230   98606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-643183-m03
	I1124 09:03:06.554185   98606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/ha-643183-m03/id_rsa Username:docker}
	I1124 09:03:06.652557   98606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:03:06.665020   98606 kubeconfig.go:125] found "ha-643183" server: "https://192.168.49.254:8443"
	I1124 09:03:06.665053   98606 api_server.go:166] Checking apiserver status ...
	I1124 09:03:06.665099   98606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:03:06.676126   98606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup
	W1124 09:03:06.684710   98606 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:03:06.684768   98606 ssh_runner.go:195] Run: ls
	I1124 09:03:06.688386   98606 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 09:03:06.693166   98606 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 09:03:06.693199   98606 status.go:463] ha-643183-m03 apiserver status = Running (err=<nil>)
	I1124 09:03:06.693209   98606 status.go:176] ha-643183-m03 status: &{Name:ha-643183-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:03:06.693226   98606 status.go:174] checking status of ha-643183-m04 ...
	I1124 09:03:06.693522   98606 cli_runner.go:164] Run: docker container inspect ha-643183-m04 --format={{.State.Status}}
	I1124 09:03:06.710628   98606 status.go:371] ha-643183-m04 host status = "Running" (err=<nil>)
	I1124 09:03:06.710650   98606 host.go:66] Checking if "ha-643183-m04" exists ...
	I1124 09:03:06.710896   98606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-643183-m04
	I1124 09:03:06.728313   98606 host.go:66] Checking if "ha-643183-m04" exists ...
	I1124 09:03:06.728629   98606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:03:06.728689   98606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-643183-m04
	I1124 09:03:06.745949   98606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/ha-643183-m04/id_rsa Username:docker}
	I1124 09:03:06.846046   98606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:03:06.858612   98606 status.go:176] ha-643183-m04 status: &{Name:ha-643183-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.78s)
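Note on the trace above: the status command's probe sequence is visible in the stderr block, namely a docker container inspect for host state, an SSH check of the kubelet service, and finally an HTTP GET against the apiserver's /healthz on the HA virtual IP. A minimal Go sketch of that last step, assuming the same endpoint and skipping certificate verification because the apiserver cert is signed by minikube's own CA (the real check in status.go may validate against the profile's CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: trust nothing about the cert chain and just ask
	// the HA virtual endpoint whether an apiserver answers /healthz.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 while healthy
}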

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.78s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 node start m02 --alsologtostderr -v 5: (7.839115607s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.35s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 stop --alsologtostderr -v 5: (51.442161536s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 start --wait true --alsologtostderr -v 5
E1124 09:04:09.058259    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.064629    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.076027    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.097383    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.138719    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.220200    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.381682    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:09.703463    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:10.345586    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:11.627209    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:14.188974    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:19.311160    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:29.553389    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:04:50.034989    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 start --wait true --alsologtostderr -v 5: (1m7.776965087s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.35s)
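The cert_rotation errors interleaved above appear to be leftovers from an earlier profile (functional-504554) whose client certificates were removed along with the profile; they come from client-go's transport cache, not from this test. A small sketch reproducing the underlying failure mode, using a hypothetical path:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Loading a client cert/key pair whose files were removed, e.g. after
	// `minikube delete`, yields the same "no such file or directory" error
	// that cert_rotation.go logs above. The path below is hypothetical.
	_, err := tls.LoadX509KeyPair(
		"/home/jenkins/.minikube/profiles/gone/client.crt",
		"/home/jenkins/.minikube/profiles/gone/client.key",
	)
	fmt.Println(err)
}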

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 node delete m03 --alsologtostderr -v 5: (9.77230725s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (47.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 stop --alsologtostderr -v 5
E1124 09:05:30.997126    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:05:58.276496    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 stop --alsologtostderr -v 5: (47.601888991s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5: exit status 7 (113.680759ms)

                                                
                                                
-- stdout --
	ha-643183
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643183-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643183-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:06:15.623369  113070 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:06:15.623482  113070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:15.623493  113070 out.go:374] Setting ErrFile to fd 2...
	I1124 09:06:15.623497  113070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:15.623709  113070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:06:15.623915  113070 out.go:368] Setting JSON to false
	I1124 09:06:15.623940  113070 mustload.go:66] Loading cluster: ha-643183
	I1124 09:06:15.624051  113070 notify.go:221] Checking for updates...
	I1124 09:06:15.624426  113070 config.go:182] Loaded profile config "ha-643183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:06:15.624451  113070 status.go:174] checking status of ha-643183 ...
	I1124 09:06:15.624929  113070 cli_runner.go:164] Run: docker container inspect ha-643183 --format={{.State.Status}}
	I1124 09:06:15.643169  113070 status.go:371] ha-643183 host status = "Stopped" (err=<nil>)
	I1124 09:06:15.643189  113070 status.go:384] host is not running, skipping remaining checks
	I1124 09:06:15.643196  113070 status.go:176] ha-643183 status: &{Name:ha-643183 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:06:15.643224  113070 status.go:174] checking status of ha-643183-m02 ...
	I1124 09:06:15.643505  113070 cli_runner.go:164] Run: docker container inspect ha-643183-m02 --format={{.State.Status}}
	I1124 09:06:15.661950  113070 status.go:371] ha-643183-m02 host status = "Stopped" (err=<nil>)
	I1124 09:06:15.661995  113070 status.go:384] host is not running, skipping remaining checks
	I1124 09:06:15.662003  113070 status.go:176] ha-643183-m02 status: &{Name:ha-643183-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:06:15.662022  113070 status.go:174] checking status of ha-643183-m04 ...
	I1124 09:06:15.662276  113070 cli_runner.go:164] Run: docker container inspect ha-643183-m04 --format={{.State.Status}}
	I1124 09:06:15.679982  113070 status.go:371] ha-643183-m04 host status = "Stopped" (err=<nil>)
	I1124 09:06:15.680002  113070 status.go:384] host is not running, skipping remaining checks
	I1124 09:06:15.680008  113070 status.go:176] ha-643183-m04 status: &{Name:ha-643183-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (57.63s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 09:06:52.919514    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:06:59.595800    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.824305406s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.63s)
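The readiness assertion here (and in DeleteSecondaryNode above) feeds kubectl a go-template that walks every node's conditions and prints the status of each Ready condition. The same template can be run through Go's text/template against a hand-built stand-in for the decoded NodeList JSON, as in this sketch:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template the test passes via `kubectl get nodes -o go-template=...`.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))

	// Hand-built stand-in for the decoded NodeList JSON (one Ready node).
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	_ = tmpl.Execute(os.Stdout, data) // prints " True"
}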

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (38.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-643183 node add --control-plane --alsologtostderr -v 5: (37.416991345s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-643183 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
TestJSONOutput/start/Command (40.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-142305 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1124 09:08:22.659588    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-142305 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.741079568s)
--- PASS: TestJSONOutput/start/Command (40.74s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-142305 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-142305 --output=json --user=testUser: (7.949493449s)
--- PASS: TestJSONOutput/stop/Command (7.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-400867 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-400867 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.958559ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5880e558-9cb8-4c27-960e-3a41d11d495b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-400867] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cfb1465-e2b6-439b-9f84-11147b576754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"8f54241e-15ef-4571-a06a-a974b9d917f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3868aa7a-1578-4861-abeb-32e83142e964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig"}}
	{"specversion":"1.0","id":"3ab87b1c-f0a5-4e16-959e-da2b170ee4e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube"}}
	{"specversion":"1.0","id":"8dbca79f-f8a5-45c0-bbef-6d4177bc2aed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"34271761-c515-48d1-bfe2-26fc1252debd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1b3c27f-15fd-4e97-abc3-921bfd31fa38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-400867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-400867
--- PASS: TestErrorJSONOutput (0.23s)
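Each stdout line above is a CloudEvents 1.0 envelope, which is what --output=json emits; the error event carries exitcode, name, and message in a flat string map. A decoder sketch for one such line (the struct is a hand-written approximation, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// Hand-written approximation of one `--output=json` line; unknown fields
// such as datacontenttype are simply ignored by encoding/json.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"f1b3c27f","source":"https://minikube.sigs.k8s.io/",` +
		`"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
}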

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-343116 --network=
E1124 09:09:01.341746    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:09:09.057955    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-343116 --network=: (29.891971178s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-343116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-343116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-343116: (2.168762535s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.08s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.2s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-848750 --network=bridge
E1124 09:09:36.765760    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-848750 --network=bridge: (23.168662252s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-848750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-848750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-848750: (2.012446341s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.20s)

                                                
                                    
TestKicExistingNetwork (24.6s)

=== RUN   TestKicExistingNetwork
I1124 09:09:56.233306    9243 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 09:09:56.250234    9243 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 09:09:56.250291    9243 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 09:09:56.250308    9243 cli_runner.go:164] Run: docker network inspect existing-network
W1124 09:09:56.266904    9243 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 09:09:56.266934    9243 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 09:09:56.266946    9243 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 09:09:56.267072    9243 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 09:09:56.283693    9243 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2543a3a5b30f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:09:61:f4:32:5e} reservation:<nil>}
I1124 09:09:56.284113    9243 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001af7060}
I1124 09:09:56.284141    9243 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 09:09:56.284198    9243 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 09:09:56.329927    9243 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-951271 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-951271 --network=existing-network: (22.475231486s)
helpers_test.go:175: Cleaning up "existing-network-951271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-951271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-951271: (1.995040293s)
I1124 09:10:20.818365    9243 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.60s)
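The network_create trace above shows the subnet picker at work: 192.168.49.0/24 is rejected because it is already bound to br-2543a3a5b30f, and 192.168.58.0/24 is taken as the first free candidate before docker network create runs. A rough sketch of such a probe loop, checking only host interface addresses (minikube's network.go additionally tracks reservations and scans more ranges):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any host interface already holds an address
// inside the candidate subnet.
func taken(cidr string) bool {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate third octets are illustrative, not minikube's exact walk.
	for _, third := range []int{49, 58, 67, 76} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}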

                                                
                                    
TestKicCustomSubnet (27.96s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-094530 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-094530 --subnet=192.168.60.0/24: (25.836402653s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-094530 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-094530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-094530
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-094530: (2.106421255s)
--- PASS: TestKicCustomSubnet (27.96s)

                                                
                                    
TestKicStaticIP (27.12s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-663018 --static-ip=192.168.200.200
E1124 09:10:58.275914    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-663018 --static-ip=192.168.200.200: (24.82975022s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-663018 ip
helpers_test.go:175: Cleaning up "static-ip-663018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-663018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-663018: (2.139323837s)
--- PASS: TestKicStaticIP (27.12s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (50.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-667350 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-667350 --driver=docker  --container-runtime=crio: (21.641638074s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-670298 --driver=docker  --container-runtime=crio
E1124 09:11:59.595531    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-670298 --driver=docker  --container-runtime=crio: (22.559361137s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-667350
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-670298
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-670298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-670298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-670298: (2.366482913s)
helpers_test.go:175: Cleaning up "first-667350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-667350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-667350: (2.331188109s)
--- PASS: TestMinikubeProfile (50.13s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-092699 --memory=3072 --mount-string /tmp/TestMountStartserial2932861026/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-092699 --memory=3072 --mount-string /tmp/TestMountStartserial2932861026/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.935809356s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-092699 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-106484 --memory=3072 --mount-string /tmp/TestMountStartserial2932861026/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-106484 --memory=3072 --mount-string /tmp/TestMountStartserial2932861026/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.807900014s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.81s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-106484 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-092699 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-092699 --alsologtostderr -v=5: (1.664730523s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-106484 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-106484
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-106484: (1.242005829s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.25s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-106484
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-106484: (6.254153071s)
--- PASS: TestMountStart/serial/RestartStopped (7.25s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-106484 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178730 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178730 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.679356947s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.17s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- rollout status deployment/busybox
E1124 09:14:09.057494    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-178730 -- rollout status deployment/busybox: (2.578245851s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-7mr7x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-ctmt9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-7mr7x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-ctmt9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-7mr7x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-ctmt9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-7mr7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-7mr7x -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-ctmt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178730 -- exec busybox-7b57f96db7-ctmt9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
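
Note: the host-reachability check above extracts the gateway IP that host.minikube.internal resolves to and pings it once from each pod. A minimal sketch with an illustrative profile "demo" and a hypothetical pod name; the awk 'NR==5' mirrors the test's assumption that busybox's nslookup prints the answer address on line 5:

    HOST_IP=$(minikube kubectl -p demo -- exec busybox-xxxxx -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p demo -- exec busybox-xxxxx -- sh -c "ping -c 1 $HOST_IP"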

TestMultiNode/serial/AddNode (23.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178730 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-178730 -v=5 --alsologtostderr: (23.104542834s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.75s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-178730 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp testdata/cp-test.txt multinode-178730:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1089837462/001/cp-test_multinode-178730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730:/home/docker/cp-test.txt multinode-178730-m02:/home/docker/cp-test_multinode-178730_multinode-178730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test_multinode-178730_multinode-178730-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730:/home/docker/cp-test.txt multinode-178730-m03:/home/docker/cp-test_multinode-178730_multinode-178730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test_multinode-178730_multinode-178730-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp testdata/cp-test.txt multinode-178730-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1089837462/001/cp-test_multinode-178730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m02:/home/docker/cp-test.txt multinode-178730:/home/docker/cp-test_multinode-178730-m02_multinode-178730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test_multinode-178730-m02_multinode-178730.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m02:/home/docker/cp-test.txt multinode-178730-m03:/home/docker/cp-test_multinode-178730-m02_multinode-178730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test_multinode-178730-m02_multinode-178730-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp testdata/cp-test.txt multinode-178730-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1089837462/001/cp-test_multinode-178730-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m03:/home/docker/cp-test.txt multinode-178730:/home/docker/cp-test_multinode-178730-m03_multinode-178730.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730 "sudo cat /home/docker/cp-test_multinode-178730-m03_multinode-178730.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 cp multinode-178730-m03:/home/docker/cp-test.txt multinode-178730-m02:/home/docker/cp-test_multinode-178730-m03_multinode-178730-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 ssh -n multinode-178730-m02 "sudo cat /home/docker/cp-test_multinode-178730-m03_multinode-178730-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.97s)
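
Note: the copy matrix above exercises every direction `minikube cp` supports, with an ssh read-back after each copy. A condensed sketch, assuming an illustrative profile "demo" with nodes demo and demo-m02:

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt                 # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test_demo.txt                # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt    # node -> node
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"                   # verify on the target node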

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-178730 node stop m03: (1.269109548s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178730 status: exit status 7 (494.139821ms)

-- stdout --
	multinode-178730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178730-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178730-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr: exit status 7 (493.385736ms)

-- stdout --
	multinode-178730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178730-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178730-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 09:14:48.079757  173476 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:14:48.079990  173476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:14:48.079998  173476 out.go:374] Setting ErrFile to fd 2...
	I1124 09:14:48.080003  173476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:14:48.080219  173476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:14:48.080383  173476 out.go:368] Setting JSON to false
	I1124 09:14:48.080404  173476 mustload.go:66] Loading cluster: multinode-178730
	I1124 09:14:48.080522  173476 notify.go:221] Checking for updates...
	I1124 09:14:48.080736  173476 config.go:182] Loaded profile config "multinode-178730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:14:48.080753  173476 status.go:174] checking status of multinode-178730 ...
	I1124 09:14:48.081204  173476 cli_runner.go:164] Run: docker container inspect multinode-178730 --format={{.State.Status}}
	I1124 09:14:48.100656  173476 status.go:371] multinode-178730 host status = "Running" (err=<nil>)
	I1124 09:14:48.100750  173476 host.go:66] Checking if "multinode-178730" exists ...
	I1124 09:14:48.101086  173476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-178730
	I1124 09:14:48.117784  173476 host.go:66] Checking if "multinode-178730" exists ...
	I1124 09:14:48.118058  173476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:14:48.118119  173476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-178730
	I1124 09:14:48.135028  173476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/multinode-178730/id_rsa Username:docker}
	I1124 09:14:48.233525  173476 ssh_runner.go:195] Run: systemctl --version
	I1124 09:14:48.239479  173476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:14:48.251327  173476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:14:48.305574  173476 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 09:14:48.296066287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:14:48.306064  173476 kubeconfig.go:125] found "multinode-178730" server: "https://192.168.67.2:8443"
	I1124 09:14:48.306093  173476 api_server.go:166] Checking apiserver status ...
	I1124 09:14:48.306122  173476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:14:48.317577  173476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	W1124 09:14:48.325518  173476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:14:48.325576  173476 ssh_runner.go:195] Run: ls
	I1124 09:14:48.329266  173476 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 09:14:48.333282  173476 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 09:14:48.333300  173476 status.go:463] multinode-178730 apiserver status = Running (err=<nil>)
	I1124 09:14:48.333308  173476 status.go:176] multinode-178730 status: &{Name:multinode-178730 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:14:48.333324  173476 status.go:174] checking status of multinode-178730-m02 ...
	I1124 09:14:48.333562  173476 cli_runner.go:164] Run: docker container inspect multinode-178730-m02 --format={{.State.Status}}
	I1124 09:14:48.351426  173476 status.go:371] multinode-178730-m02 host status = "Running" (err=<nil>)
	I1124 09:14:48.351449  173476 host.go:66] Checking if "multinode-178730-m02" exists ...
	I1124 09:14:48.351682  173476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-178730-m02
	I1124 09:14:48.368810  173476 host.go:66] Checking if "multinode-178730-m02" exists ...
	I1124 09:14:48.369061  173476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:14:48.369101  173476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-178730-m02
	I1124 09:14:48.386103  173476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21978-5690/.minikube/machines/multinode-178730-m02/id_rsa Username:docker}
	I1124 09:14:48.483366  173476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:14:48.495264  173476 status.go:176] multinode-178730-m02 status: &{Name:multinode-178730-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:14:48.495299  173476 status.go:174] checking status of multinode-178730-m03 ...
	I1124 09:14:48.495572  173476 cli_runner.go:164] Run: docker container inspect multinode-178730-m03 --format={{.State.Status}}
	I1124 09:14:48.513288  173476 status.go:371] multinode-178730-m03 host status = "Stopped" (err=<nil>)
	I1124 09:14:48.513318  173476 status.go:384] host is not running, skipping remaining checks
	I1124 09:14:48.513327  173476 status.go:176] multinode-178730-m03 status: &{Name:multinode-178730-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
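
Note: with any node stopped, `minikube status` exits 7 by design, so the non-zero exits above are expected, not failures. A minimal sketch against an illustrative profile "demo":

    minikube -p demo node stop m03
    minikube -p demo status; echo "exit code: $?"   # 7 while any node is stopped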

TestMultiNode/serial/StartAfterStop (7.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-178730 node start m03 -v=5 --alsologtostderr: (6.633555008s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.34s)

TestMultiNode/serial/RestartKeepsNodes (78.44s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178730
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-178730
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-178730: (29.462470096s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178730 --wait=true -v=5 --alsologtostderr
E1124 09:15:58.269503    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178730 --wait=true -v=5 --alsologtostderr: (48.855241304s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178730
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.44s)
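
Note: this test verifies that a full stop/start cycle preserves the node set. The equivalent flow, sketched against an illustrative profile "demo":

    minikube node list -p demo          # record the nodes before the restart
    minikube stop -p demo
    minikube start -p demo --wait=true
    minikube node list -p demo          # the same nodes should be listed again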

TestMultiNode/serial/DeleteNode (5.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-178730 node delete m03: (4.652277847s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

TestMultiNode/serial/StopMultiNode (30.3s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-178730 stop: (30.103728229s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178730 status: exit status 7 (98.225712ms)

-- stdout --
	multinode-178730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-178730-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr: exit status 7 (94.447708ms)

-- stdout --
	multinode-178730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-178730-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 09:16:49.791546  183453 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:49.791880  183453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:49.791890  183453 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:49.791956  183453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:49.792506  183453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:16:49.792854  183453 out.go:368] Setting JSON to false
	I1124 09:16:49.792876  183453 mustload.go:66] Loading cluster: multinode-178730
	I1124 09:16:49.792927  183453 notify.go:221] Checking for updates...
	I1124 09:16:49.793555  183453 config.go:182] Loaded profile config "multinode-178730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:49.793581  183453 status.go:174] checking status of multinode-178730 ...
	I1124 09:16:49.794067  183453 cli_runner.go:164] Run: docker container inspect multinode-178730 --format={{.State.Status}}
	I1124 09:16:49.813154  183453 status.go:371] multinode-178730 host status = "Stopped" (err=<nil>)
	I1124 09:16:49.813173  183453 status.go:384] host is not running, skipping remaining checks
	I1124 09:16:49.813179  183453 status.go:176] multinode-178730 status: &{Name:multinode-178730 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:16:49.813200  183453 status.go:174] checking status of multinode-178730-m02 ...
	I1124 09:16:49.813530  183453 cli_runner.go:164] Run: docker container inspect multinode-178730-m02 --format={{.State.Status}}
	I1124 09:16:49.830381  183453 status.go:371] multinode-178730-m02 host status = "Stopped" (err=<nil>)
	I1124 09:16:49.830402  183453 status.go:384] host is not running, skipping remaining checks
	I1124 09:16:49.830408  183453 status.go:176] multinode-178730-m02 status: &{Name:multinode-178730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.30s)

TestMultiNode/serial/RestartMultiNode (52.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178730 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 09:16:59.596087    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178730 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.03489977s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178730 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.67s)

TestMultiNode/serial/ValidateNameConflict (24.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178730
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178730-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-178730-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.143482ms)

-- stdout --
	* [multinode-178730-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-178730-m02' is duplicated with machine name 'multinode-178730-m02' in profile 'multinode-178730'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178730-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178730-m03 --driver=docker  --container-runtime=crio: (21.241665274s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178730
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-178730: exit status 80 (300.26521ms)

-- stdout --
	* Adding node m03 to cluster multinode-178730 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-178730-m03 already exists in multinode-178730-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-178730-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-178730-m03: (2.351304381s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.04s)
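
Note: the two failures above are the intended guardrails: starting a profile whose name matches a machine name inside an existing profile exits 14 (MK_USAGE), and `node add` refuses a node name already owned by another profile, exiting 80 (GUEST_NODE_ADD). An illustrative reproduction with a hypothetical profile "demo":

    minikube start -p demo --nodes=2 --driver=docker --container-runtime=crio   # creates machines demo and demo-m02
    minikube start -p demo-m02 --driver=docker --container-runtime=crio         # rejected: duplicates machine name in profile "demo"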

TestPreload (100.73s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058777 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058777 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.044490994s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058777 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-058777 image pull gcr.io/k8s-minikube/busybox: (1.444500503s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-058777
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-058777: (5.871954751s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058777 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1124 09:19:09.057562    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058777 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.725566139s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058777 image list
helpers_test.go:175: Cleaning up "test-preload-058777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-058777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-058777: (2.409047792s)
--- PASS: TestPreload (100.73s)
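
Note: TestPreload starts a cluster with the preloaded-images tarball disabled, side-loads an extra image, then restarts and confirms the image survived. The same flow, sketched against an illustrative profile "demo":

    minikube start -p demo --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo
    minikube start -p demo --driver=docker --container-runtime=crio
    minikube -p demo image list    # busybox should still be present after the restart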

TestScheduledStopUnix (96.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-310817 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-310817 --memory=3072 --driver=docker  --container-runtime=crio: (20.2713099s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-310817 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 09:20:11.768349  201106 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:20:11.768469  201106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:11.768480  201106 out.go:374] Setting ErrFile to fd 2...
	I1124 09:20:11.768486  201106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:11.768728  201106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:20:11.768957  201106 out.go:368] Setting JSON to false
	I1124 09:20:11.769044  201106 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:11.769404  201106 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:20:11.769498  201106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/config.json ...
	I1124 09:20:11.769662  201106 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:11.769757  201106 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-310817 -n scheduled-stop-310817
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 09:20:12.148808  201255 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:20:12.149066  201255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:12.149075  201255 out.go:374] Setting ErrFile to fd 2...
	I1124 09:20:12.149079  201255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:12.149321  201255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:20:12.149606  201255 out.go:368] Setting JSON to false
	I1124 09:20:12.149877  201255 daemonize_unix.go:73] killing process 201141 as it is an old scheduled stop
	I1124 09:20:12.149985  201255 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:12.150383  201255 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:20:12.150477  201255 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/config.json ...
	I1124 09:20:12.150646  201255 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:12.150737  201255 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 09:20:12.156025    9243 retry.go:31] will retry after 71.848µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.157170    9243 retry.go:31] will retry after 193.291µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.158302    9243 retry.go:31] will retry after 169.333µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.159424    9243 retry.go:31] will retry after 268.968µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.160533    9243 retry.go:31] will retry after 553.917µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.161645    9243 retry.go:31] will retry after 860.016µs: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.162776    9243 retry.go:31] will retry after 1.686328ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.164965    9243 retry.go:31] will retry after 1.790101ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.167168    9243 retry.go:31] will retry after 3.523506ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.171398    9243 retry.go:31] will retry after 2.894252ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.174564    9243 retry.go:31] will retry after 6.598273ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.181757    9243 retry.go:31] will retry after 10.992443ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.192991    9243 retry.go:31] will retry after 6.895728ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.200208    9243 retry.go:31] will retry after 14.869201ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.215406    9243 retry.go:31] will retry after 22.205324ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
I1124 09:20:12.238643    9243 retry.go:31] will retry after 32.445001ms: open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-310817 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1124 09:20:32.127387    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-310817 -n scheduled-stop-310817
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-310817
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-310817 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 09:20:38.026412  201895 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:20:38.026661  201895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:38.026669  201895 out.go:374] Setting ErrFile to fd 2...
	I1124 09:20:38.026673  201895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:20:38.026850  201895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:20:38.027087  201895 out.go:368] Setting JSON to false
	I1124 09:20:38.027161  201895 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:38.027470  201895 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:20:38.027530  201895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/scheduled-stop-310817/config.json ...
	I1124 09:20:38.027710  201895 mustload.go:66] Loading cluster: scheduled-stop-310817
	I1124 09:20:38.027795  201895 config.go:182] Loaded profile config "scheduled-stop-310817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1124 09:20:58.272183    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-310817
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-310817: exit status 7 (77.65963ms)

-- stdout --
	scheduled-stop-310817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-310817 -n scheduled-stop-310817
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-310817 -n scheduled-stop-310817: exit status 7 (76.707281ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-310817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-310817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-310817: (4.524271106s)
--- PASS: TestScheduledStopUnix (96.28s)
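
Note: the scheduled-stop flow above arms a delayed stop, cancels it, then arms a short one and waits for the host to reach Stopped. Sketched against an illustrative profile "demo":

    minikube stop -p demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled   # cancel all pending scheduled stops
    minikube stop -p demo --schedule 15s       # arm again; the host stops ~15s later
    minikube status -p demo                    # exits 7 and reports Stopped afterwards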

TestInsufficientStorage (11.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-761969 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-761969 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.244188371s)

-- stdout --
	{"specversion":"1.0","id":"e2845a44-f815-4ef0-a51e-f9988ad1e35c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-761969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"db795791-7219-4aac-a46e-e9604c141c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"9dc1f845-dc9f-44aa-a5a2-c49e54f73959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2bb43c1b-0698-42cc-87d0-3c1ca535973f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig"}}
	{"specversion":"1.0","id":"ec70898d-d86a-4124-94c1-cb2d9205b5a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube"}}
	{"specversion":"1.0","id":"fa42d544-b3f9-43a6-b7ea-34d918d319a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2ee496a0-c160-4ebf-9730-db1280a5603d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8f6e2877-ba9a-4ce6-b39e-940ac6fd9cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8d35c90f-f411-4e0d-95a5-836da0b6890c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f9128115-9f5b-487d-90a6-e40f98c82bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"96212055-a641-481a-89c1-df9bef4216ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7ca9e7aa-e327-4b8c-a5dc-52700e820e7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-761969\" primary control-plane node in \"insufficient-storage-761969\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f87fdb67-5a3a-4c99-9fa5-223ebd8c23f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c184f77-9780-4479-a77f-e052a03ea5b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6074173b-a7dd-4ac6-8775-02cac763a369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-761969 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-761969 --output=json --layout=cluster: exit status 7 (290.808714ms)

-- stdout --
	{"Name":"insufficient-storage-761969","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-761969","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
** stderr ** 
	E1124 09:21:37.237033  204416 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-761969" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-761969 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-761969 --output=json --layout=cluster: exit status 7 (296.181423ms)

-- stdout --
	{"Name":"insufficient-storage-761969","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-761969","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 09:21:37.533830  204527 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-761969" does not appear in /home/jenkins/minikube-integration/21978-5690/kubeconfig
	E1124 09:21:37.544164  204527 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/insufficient-storage-761969/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-761969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-761969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-761969: (1.872233429s)
--- PASS: TestInsufficientStorage (11.70s)
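
Note: the out-of-disk condition appears to be simulated through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events above; with them set, `start` aborts with exit 26 (RSRC_DOCKER_STORAGE) and `status` reports code 507. Sketched against an illustrative profile "demo":

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p demo --output=json --driver=docker --container-runtime=crio
    minikube status -p demo --output=json --layout=cluster   # StatusCode 507, StatusName InsufficientStorage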

TestRunningBinaryUpgrade (47.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4151939628 start -p running-upgrade-065432 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4151939628 start -p running-upgrade-065432 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.831856727s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-065432 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-065432 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.474200553s)
helpers_test.go:175: Cleaning up "running-upgrade-065432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-065432
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-065432: (2.376136222s)
--- PASS: TestRunningBinaryUpgrade (47.16s)
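
Note: the running-binary upgrade starts a cluster with an older release binary (staged here in /tmp), then reruns `start` on the same profile with the binary under test. A sketch of that shape; the download URL assumes minikube's usual GitHub release layout, and "demo" is an illustrative profile:

    curl -Lo minikube-old https://github.com/kubernetes/minikube/releases/download/v1.32.0/minikube-linux-amd64
    chmod +x minikube-old
    ./minikube-old start -p demo --memory=3072 --vm-driver=docker --container-runtime=crio
    minikube start -p demo --memory=3072 --driver=docker --container-runtime=crio   # newer binary adopts the running cluster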

TestKubernetesUpgrade (320.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.134967261s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-967467
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-967467: (2.35506102s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-967467 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-967467 status --format={{.Host}}: exit status 7 (83.383348ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.906559624s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-967467 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (76.490434ms)

-- stdout --
	* [kubernetes-upgrade-967467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-967467
	    minikube start -p kubernetes-upgrade-967467 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9674672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-967467 --kubernetes-version=v1.35.0-beta.0
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-967467 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.183483806s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-967467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-967467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-967467: (2.986299179s)
--- PASS: TestKubernetesUpgrade (320.78s)
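
Note: the exit status 106 above is the expected K8S_DOWNGRADE_UNSUPPORTED refusal; minikube will not downgrade a live cluster in place. A minimal recovery sketch following the first suggestion minikube prints, using the profile name and versions from this run:

    # In-place downgrade is refused, so recreate the profile at the older version.
    minikube delete -p kubernetes-upgrade-967467
    minikube start -p kubernetes-upgrade-967467 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio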

TestMissingContainerUpgrade (76.58s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.878427147 start -p missing-upgrade-121748 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.878427147 start -p missing-upgrade-121748 --memory=3072 --driver=docker  --container-runtime=crio: (19.271446201s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-121748
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-121748: (10.453315477s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-121748
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-121748 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-121748 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.534065564s)
helpers_test.go:175: Cleaning up "missing-upgrade-121748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-121748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-121748: (2.933448125s)
--- PASS: TestMissingContainerUpgrade (76.58s)
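
Note: this test simulates a cluster whose container was deleted behind minikube's back. A sketch of the flow exercised above, assuming the profile name from this run:

    # The cluster is first created with an old minikube binary, then its container is removed directly.
    docker stop missing-upgrade-121748
    docker rm missing-upgrade-121748
    # A subsequent start on the same profile must notice the missing container and recreate it.
    out/minikube-linux-amd64 start -p missing-upgrade-121748 --memory=3072 --driver=docker --container-runtime=crio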

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestPause/serial/Start (55.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374067 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-374067 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.922493934s)
--- PASS: TestPause/serial/Start (55.92s)

TestStoppedBinaryUpgrade/Upgrade (78.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2880120870 start -p stopped-upgrade-385309 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1124 09:21:59.595266    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2880120870 start -p stopped-upgrade-385309 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m0.346242998s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2880120870 -p stopped-upgrade-385309 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2880120870 -p stopped-upgrade-385309 stop: (2.696246231s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-385309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-385309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.190663162s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.23s)
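
Note: the upgrade path exercised here is create-with-old-binary, stop, start-with-new-binary. Condensed from the commands in this run (the /tmp path is the downloaded v1.32.0 release binary):

    /tmp/minikube-v1.32.0.2880120870 start -p stopped-upgrade-385309 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.2880120870 -p stopped-upgrade-385309 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-385309 --memory=3072 --driver=docker --container-runtime=crio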

TestPause/serial/SecondStartNoReconfiguration (7.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374067 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-374067 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.832248112s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.642739ms)
-- stdout --
	* [NoKubernetes-010717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
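
Note: exit status 14 is the expected MK_USAGE rejection; --no-kubernetes and --kubernetes-version are mutually exclusive. Per the hint printed above, the valid forms are:

    # Start without Kubernetes at all (no version flag)...
    minikube start -p NoKubernetes-010717 --no-kubernetes --driver=docker --container-runtime=crio
    # ...or clear a globally configured version if one is set.
    minikube config unset kubernetes-version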

TestNoKubernetes/serial/StartWithK8s (29.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-010717 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-010717 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.008172437s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-010717 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-385309
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-385309: (1.397840894s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

TestNetworkPlugins/group/false (4.15s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-949664 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-949664 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.435431ms)
-- stdout --
	* [false-949664] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1124 09:23:05.855252  231924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:23:05.855512  231924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:23:05.855521  231924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:23:05.855525  231924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:23:05.855701  231924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5690/.minikube/bin
	I1124 09:23:05.856175  231924 out.go:368] Setting JSON to false
	I1124 09:23:05.857231  231924 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3932,"bootTime":1763972254,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:23:05.857284  231924 start.go:143] virtualization: kvm guest
	I1124 09:23:05.859220  231924 out.go:179] * [false-949664] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:23:05.860367  231924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:23:05.860368  231924 notify.go:221] Checking for updates...
	I1124 09:23:05.862624  231924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:23:05.864204  231924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5690/kubeconfig
	I1124 09:23:05.865424  231924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5690/.minikube
	I1124 09:23:05.866661  231924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:23:05.867813  231924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:23:05.869402  231924 config.go:182] Loaded profile config "NoKubernetes-010717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:23:05.869487  231924 config.go:182] Loaded profile config "cert-expiration-362724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:23:05.869570  231924 config.go:182] Loaded profile config "cert-options-501889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:23:05.869672  231924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:23:05.897652  231924 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:23:05.897739  231924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:23:05.961953  231924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:23:05.951611046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:23:05.962057  231924 docker.go:319] overlay module found
	I1124 09:23:05.964508  231924 out.go:179] * Using the docker driver based on user configuration
	I1124 09:23:05.965713  231924 start.go:309] selected driver: docker
	I1124 09:23:05.965726  231924 start.go:927] validating driver "docker" against <nil>
	I1124 09:23:05.965737  231924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:23:05.967673  231924 out.go:203] 
	W1124 09:23:05.968701  231924 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 09:23:05.969743  231924 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-949664 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-949664

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-949664

>>> host: /etc/nsswitch.conf:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/hosts:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/resolv.conf:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-949664

>>> host: crictl pods:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: crictl containers:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> k8s: describe netcat deployment:
error: context "false-949664" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-949664" does not exist

>>> k8s: netcat logs:
error: context "false-949664" does not exist

>>> k8s: describe coredns deployment:
error: context "false-949664" does not exist

>>> k8s: describe coredns pods:
error: context "false-949664" does not exist

>>> k8s: coredns logs:
error: context "false-949664" does not exist

>>> k8s: describe api server pod(s):
error: context "false-949664" does not exist

>>> k8s: api server logs:
error: context "false-949664" does not exist

>>> host: /etc/cni:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: ip a s:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: ip r s:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: iptables-save:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: iptables table nat:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> k8s: describe kube-proxy daemon set:
error: context "false-949664" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-949664" does not exist

>>> k8s: kube-proxy logs:
error: context "false-949664" does not exist

>>> host: kubelet daemon status:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: kubelet daemon config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> k8s: kubelet logs:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-362724
contexts:
- context:
    cluster: cert-expiration-362724
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-362724
  name: cert-expiration-362724
current-context: cert-expiration-362724
kind: Config
users:
- name: cert-expiration-362724
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.crt
    client-key: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-949664

>>> host: docker daemon status:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: docker daemon config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/docker/daemon.json:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: docker system info:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: cri-docker daemon status:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: cri-docker daemon config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: cri-dockerd version:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: containerd daemon status:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: containerd daemon config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/containerd/config.toml:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: containerd config dump:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: crio daemon status:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: crio daemon config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: /etc/crio:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

>>> host: crio config:
* Profile "false-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949664"

----------------------- debugLogs end: false-949664 [took: 3.753118367s] --------------------------------
helpers_test.go:175: Cleaning up "false-949664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-949664
--- PASS: TestNetworkPlugins/group/false (4.15s)
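
Note: the MK_USAGE failure is the point of this test: --cni=false is rejected because the crio runtime requires a CNI plugin. Any explicit CNI validated elsewhere in this report would be accepted instead, for example (hypothetical re-run of the same profile):

    minikube start -p false-949664 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio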

TestNoKubernetes/serial/StartWithStopK8s (18.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.386757236s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-010717 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-010717 status -o json: exit status 2 (315.62444ms)
-- stdout --
	{"Name":"NoKubernetes-010717","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-010717
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-010717: (2.182381976s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.89s)
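
Note: the exit status 2 above is expected: "minikube status" encodes component state in its exit code, so a profile whose host is Running but whose kubelet and apiserver are Stopped reports non-zero even though that is the desired state for a --no-kubernetes cluster (compare exit status 7 earlier in this report for a fully stopped profile).

    # Non-zero exit is by design whenever any component is not Running.
    minikube -p NoKubernetes-010717 status -o json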

TestNoKubernetes/serial/Start (6.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-010717 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.879801494s)
--- PASS: TestNoKubernetes/serial/Start (6.88s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
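
Note: v0.0.0 is the placeholder version used for a no-Kubernetes profile; the check asserts that no kubelet/kubeadm binaries were downloaded into that cache path. A hypothetical manual re-check of the same directory:

    ls /home/jenkins/minikube-integration/21978-5690/.minikube/cache/linux/amd64/v0.0.0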

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-010717 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-010717 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.739798ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
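
Note: "Process exited with status 3" is the success condition here: systemctl is-active exits non-zero (3 for an inactive unit) when kubelet is not running, which is exactly what a no-Kubernetes profile should report. The probe the test runs over SSH:

    # Exit 0 would mean kubelet is active (a failure for this test); 3 means inactive.
    minikube ssh -p NoKubernetes-010717 "sudo systemctl is-active --quiet service kubelet"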

TestNoKubernetes/serial/ProfileList (4.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (4.292253814s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.99s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-010717
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-010717: (1.272031485s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-010717 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-010717 --driver=docker  --container-runtime=crio: (6.537927388s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-010717 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-010717 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.850782ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestNetworkPlugins/group/auto/Start (43.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1124 09:24:09.057877    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.277742902s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.28s)

TestNetworkPlugins/group/kindnet/Start (45.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.33154318s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-949664 "pgrep -a kubelet"
I1124 09:24:45.958103    9243 config.go:182] Loaded profile config "auto-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2tz2" [b2a6bfe9-9f99-49ad-ace9-1bf601796070] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2tz2" [b2a6bfe9-9f99-49ad-ace9-1bf601796070] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003019859s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
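
Note: the HairPin check verifies hairpin traffic: the pod behind the netcat Service dials its own Service name and must reach itself. The probe, as run above:

    kubectl --context auto-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"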

TestNetworkPlugins/group/calico/Start (45.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (45.188729195s)
--- PASS: TestNetworkPlugins/group/calico/Start (45.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8x72x" [0291e0dd-e237-44f7-8ed8-cca280a9a5d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004263919s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-949664 "pgrep -a kubelet"
I1124 09:25:28.824165    9243 config.go:182] Loaded profile config "kindnet-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fv94v" [9ea0ac9f-f209-4fa3-b33d-21e4856a1de9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fv94v" [9ea0ac9f-f209-4fa3-b33d-21e4856a1de9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003394275s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/Start (48.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.948718104s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.95s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mh2rr" [08a87a77-51f3-471e-b35c-5dd565627f62] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-mh2rr" [08a87a77-51f3-471e-b35c-5dd565627f62] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003384793s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-949664 "pgrep -a kubelet"
I1124 09:26:07.034150    9243 config.go:182] Loaded profile config "calico-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nkcrs" [4756261c-b7ac-4767-99a8-679217fe78ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nkcrs" [4756261c-b7ac-4767-99a8-679217fe78ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.00427933s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)
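
NetCatPod seeds the probe workload with replace --force, which deletes and recreates the object so a deployment left over from an earlier run cannot block the test, then waits on the app=netcat label. A hand-run equivalent, with rollout status standing in for the harness's own pod polling:

    kubectl --context calico-949664 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-949664 rollout status deployment/netcat --timeout=15m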

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.209530194s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)
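
The DNS step resolves the cluster's own API Service from inside a pod, exercising the pod-to-kube-dns path end to end; kubernetes.default expands via the pod's search domains to kubernetes.default.svc.cluster.local:

    kubectl --context calico-949664 exec deployment/netcat -- nslookup kubernetes.default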

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (47.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.285709196s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-949664 "pgrep -a kubelet"
I1124 09:26:48.435735    9243 config.go:182] Loaded profile config "custom-flannel-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hrcvt" [d77b68b3-67ae-426a-ad86-8c1ead018f90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hrcvt" [d77b68b3-67ae-426a-ad86-8c1ead018f90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.005066898s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (68.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-949664 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.269494558s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-t9zwp" [15f62330-8146-48f4-a700-50e32f4954c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00421764s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-949664 "pgrep -a kubelet"
I1124 09:27:25.189399    9243 config.go:182] Loaded profile config "enable-default-cni-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mbpt6" [03e245ff-ae98-4134-b179-e593d010bd62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mbpt6" [03e245ff-ae98-4134-b179-e593d010bd62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004087067s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-949664 "pgrep -a kubelet"
I1124 09:27:30.152997    9243 config.go:182] Loaded profile config "flannel-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6k46n" [dc31bcd5-92d3-442e-8f26-89b8b275610b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6k46n" [dc31bcd5-92d3-442e-8f26-89b8b275610b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004365178s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (46.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.518067295s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (46.52s)
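
Across the StartStop groups the FirstStart invocations share one skeleton and differ mainly in --kubernetes-version (v1.28.0 here, v1.34.2 and v1.35.0-beta.0 in the siblings) plus group-specific flags (the kvm-network/kvm-qemu-uri flags in this one matter only to the KVM driver, not to --driver=docker). The shared shape, with the profile name as a placeholder:

    out/minikube-linux-amd64 start -p <profile> --memory=3072 \
      --alsologtostderr --wait=true \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.28.0    # or v1.34.2 / v1.35.0-beta.0 per group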

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (47.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (47.805368841s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.81s)
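
--preload=false is what makes this group "no-preload": minikube normally seeds the node from a preloaded tarball of images and binaries, and disabling that forces every image to be pulled through the container runtime instead, which is one reason this FirstStart is no faster than old-k8s-version's despite the newer release:

    out/minikube-linux-amd64 start -p no-preload-938348 --memory=3072 \
      --wait=true --preload=false --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0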

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-949664 "pgrep -a kubelet"
I1124 09:28:26.149123    9243 config.go:182] Loaded profile config "bridge-949664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-949664 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4hzjx" [b8139ed6-8438-4657-991e-f0403ad5de42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4hzjx" [b8139ed6-8438-4657-991e-f0403ad5de42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004037694s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-949664 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-949664 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E1124 09:30:23.813383    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:30:25.095147    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-767267 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2e8d6e38-9822-430d-b775-977600e48262] Pending
helpers_test.go:352: "busybox" [2e8d6e38-9822-430d-b775-977600e48262] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2e8d6e38-9822-430d-b775-977600e48262] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003256536s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-767267 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)
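
DeployApp is the generic workload smoke test: create the busybox pod, wait for it to run, then exec a trivial command to prove the container is actually usable; ulimit -n doubles as a check on the file-descriptor limit the runtime hands to containers. By hand:

    kubectl --context old-k8s-version-767267 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-767267 wait pod busybox \
      --for=condition=Ready --timeout=8m    # stand-in for the harness's polling
    kubectl --context old-k8s-version-767267 exec busybox -- /bin/sh -c "ulimit -n"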

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-938348 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4f8a9222-5610-494c-8cd8-a464fdacd234] Pending
helpers_test.go:352: "busybox" [4f8a9222-5610-494c-8cd8-a464fdacd234] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4f8a9222-5610-494c-8cd8-a464fdacd234] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004129232s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-938348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (17.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-767267 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-767267 --alsologtostderr -v=3: (17.718143812s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (42.973513363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-938348 --alsologtostderr -v=3
E1124 09:29:09.058192    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-504554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-938348 --alsologtostderr -v=3: (16.834959882s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267: exit status 7 (78.926706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-767267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
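
The "may be ok" note is the point of this test: minikube status exits 7 when the host is stopped, the harness accepts that, and addons enable still succeeds against the stopped profile (the addon is recorded in the profile config, and the dashboard pods checked later in this group confirm it comes up on the next start). The sequence, including the earlier Stop step:

    out/minikube-linux-amd64 stop -p old-k8s-version-767267
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267
    # prints "Stopped", exit status 7
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-767267 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4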

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-767267 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.722312583s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-767267 -n old-k8s-version-767267
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348: exit status 7 (122.431213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-938348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (46.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-938348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.348476789s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-938348 -n no-preload-938348
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (33.26218728s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.26s)
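
Two flags distinguish newest-cni: --wait is narrowed to apiserver,system_pods,default_sa because with a bare CNI config no user pods can schedule yet (the WARNING lines later in this group say exactly that), and --extra-config passes a setting straight through to a component, here kubeadm's pod network CIDR:

    out/minikube-linux-amd64 start -p newest-cni-639420 --memory=3072 \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0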

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [405ec516-207e-443a-b038-ac6f6da6efb1] Pending
helpers_test.go:352: "busybox" [405ec516-207e-443a-b038-ac6f6da6efb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [405ec516-207e-443a-b038-ac6f6da6efb1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004432121s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-164377 --alsologtostderr -v=3
E1124 09:29:48.729881    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:29:51.291414    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-164377 --alsologtostderr -v=3: (18.250269537s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4mz29" [2140f28a-310d-48ca-ab87-329ddfaaf554] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003400053s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-639420 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-639420 --alsologtostderr -v=3: (2.672529504s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4mz29" [2140f28a-310d-48ca-ab87-329ddfaaf554] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004353977s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-767267 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420: exit status 7 (79.622698ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-639420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-639420 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.89042947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639420 -n newest-cni-639420
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-stj4z" [0a509573-31ff-48cc-8479-7c2be43f7688] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004249696s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-767267 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
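
VerifyKubernetesImages dumps the node runtime's image list as JSON and checks it against the expected set for the pinned Kubernetes version; the "Found non-minikube image" lines above are informational, not failures (the test still passes). To inspect by hand, assuming you want to filter the JSON yourself:

    out/minikube-linux-amd64 -p old-k8s-version-767267 image list --format=json
    # emits one JSON object per image (repo tags, id, size); pipe through jq
    # to pick out anything not from registry.k8s.io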

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377: exit status 7 (87.270549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-164377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1124 09:30:06.654489    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-164377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (47.448614407s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-164377 -n default-k8s-diff-port-164377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-stj4z" [0a509573-31ff-48cc-8479-7c2be43f7688] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003156461s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-938348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-639420 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1124 09:30:12.037049    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:30:12.209848    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:30:12.359568    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.835955773s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-938348 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1124 09:30:14.823891    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:30:15.009082    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2hcnx" [26b38e76-0b44-4ea1-87db-97ff20b2a167] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003729121s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-673346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4646ee42-5d8b-47af-825e-b809a988472f] Pending
E1124 09:30:58.268526    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/addons-962100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4646ee42-5d8b-47af-825e-b809a988472f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4646ee42-5d8b-47af-825e-b809a988472f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003353396s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-673346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2hcnx" [26b38e76-0b44-4ea1-87db-97ff20b2a167] Running
E1124 09:31:00.727314    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:00.733714    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:00.745051    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:00.766436    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:00.807800    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:00.889235    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:01.050789    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:01.372668    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:02.014709    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:03.296087    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:03.502666    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:05.857762    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004209405s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-164377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-164377 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:31:06.499317    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-673346 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-673346 --alsologtostderr -v=3: (18.157471954s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346: exit status 7 (76.678544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-673346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
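The probe above leans on "minikube status --format={{.Host}}" exiting with code 7, which the log pairs with a host state of Stopped and then notes as "may be ok" rather than failing. A minimal Go sketch of that tolerant status check, reusing the binary path and profile name from the log (illustrative only, not the test's own helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for just the host state of the embed-certs profile.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-673346")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	// In the log above, exit code 7 accompanies a Stopped host; treat it as
	// expected after a deliberate stop instead of a test failure.
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Printf("host is %s (exit 7, may be ok after a stop)\n", state)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("host is", state)
}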

TestStartStop/group/embed-certs/serial/SecondStart (53.28s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1124 09:31:41.703562    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:44.464506    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/kindnet-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.617128    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.623499    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.634873    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.656228    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.697611    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.779082    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:48.940613    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:49.262570    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:49.904477    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:51.185821    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:53.747713    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:58.869711    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:59.595475    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/functional-683533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:09.111484    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-673346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (52.955360718s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-673346 -n embed-certs-673346
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sndp5" [64591962-d4dc-4736-bf59-225893e09447] Running
E1124 09:32:22.664821    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/calico-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:23.852100    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:23.858488    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:23.869852    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:23.891247    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:23.932664    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:24.014161    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:24.175481    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:24.497225    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.138918    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.392579    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.398907    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.410301    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.431671    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.473060    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.554512    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:25.716112    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:26.037772    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003962396s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
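Checks like the one above poll the kubernetes-dashboard namespace for pods labeled k8s-app=kubernetes-dashboard and pass as soon as one is healthy inside the 9m0s budget. A hypothetical client-go sketch of that wait loop (a standalone rewrite, not the helper in helpers_test.go; it assumes KUBECONFIG points at the cluster under test):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(9 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				// The log above only requires a Running dashboard pod.
				if p.Status.Phase == "Running" {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // poll, as the test harness does
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}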

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sndp5" [64591962-d4dc-4736-bf59-225893e09447] Running
E1124 09:32:26.420659    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:26.679219    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:27.960965    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:28.982150    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:29.593022    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/custom-flannel-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:30.020993    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/auto-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:32:30.523319    9243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/enable-default-cni-949664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003971782s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-673346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.68s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-673346 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:32:31.394762    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:32:31.541462    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:32:31.689794    9243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.68s)
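The repeated "Not caching binary" lines record the download scheme in play here: kubeadm is fetched straight from dl.k8s.io, pinned by a checksum=file: query pointing at the published .sha256 digest. A rough stdlib-only Go sketch of what that verification amounts to (an illustration, not minikube's actual downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory, failing on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the bare hex digest
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubeadm checksum verified")
}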


Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.06
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
373 TestNetworkPlugins/group/kubenet 3.88
381 TestNetworkPlugins/group/cilium 4.32
387 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1124 08:28:46.559316    9243 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1124 08:28:46.608280    9243 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1124 08:28:46.622581    9243 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)
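The two 404 warnings above are the whole story of this skip: the preload check probes each known tarball location in turn and, when neither answers 200, concludes "No preload image". A hedged Go sketch of that probe, using the exact URLs from the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Candidate preload tarball locations, as logged above.
	candidates := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4",
	}
	for _, url := range candidates {
		resp, err := http.Head(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("preload found:", url)
			return
		}
		// Mirrors the W-level "status code: 404" lines in the log.
		fmt.Printf("%s status code: %d\n", url, resp.StatusCode)
	}
	fmt.Println("no preload image; dependent tests will be skipped")
}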

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.88s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-949664 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-949664

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-949664

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/hosts:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/resolv.conf:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-949664

>>> host: crictl pods:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: crictl containers:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> k8s: describe netcat deployment:
error: context "kubenet-949664" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-949664" does not exist

>>> k8s: netcat logs:
error: context "kubenet-949664" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-949664" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-949664" does not exist

>>> k8s: coredns logs:
error: context "kubenet-949664" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-949664" does not exist

>>> k8s: api server logs:
error: context "kubenet-949664" does not exist

>>> host: /etc/cni:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: ip a s:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: ip r s:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: iptables-save:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: iptables table nat:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-949664" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-949664" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-949664" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: kubelet daemon config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> k8s: kubelet logs:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-362724
contexts:
- context:
    cluster: cert-expiration-362724
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-362724
  name: cert-expiration-362724
current-context: cert-expiration-362724
kind: Config
users:
- name: cert-expiration-362724
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.crt
    client-key: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-949664

>>> host: docker daemon status:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: docker daemon config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: docker system info:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: cri-docker daemon status:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: cri-docker daemon config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: cri-dockerd version:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: containerd daemon status:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: containerd daemon config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: containerd config dump:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: crio daemon status:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: crio daemon config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: /etc/crio:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"

>>> host: crio config:
* Profile "kubenet-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949664"
----------------------- debugLogs end: kubenet-949664 [took: 3.693990705s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-949664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-949664
--- SKIP: TestNetworkPlugins/group/kubenet (3.88s)

TestNetworkPlugins/group/cilium (4.32s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-949664 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-949664

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-949664

>>> host: /etc/nsswitch.conf:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"
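Note: the ">>> host:" probes run against the profile's node rather than the API server, and fail one step earlier: the profile itself does not exist. A sketch of the failing shape using the binary under test (hypothetical invocation; the exact probe command is an assumption):

  out/minikube-linux-amd64 ssh -p cilium-949664 "cat /etc/nsswitch.conf"
  # * Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.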

>>> host: /etc/hosts:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/resolv.conf:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-949664

>>> host: crictl pods:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: crictl containers:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> k8s: describe netcat deployment:
error: context "cilium-949664" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-949664" does not exist

>>> k8s: netcat logs:
error: context "cilium-949664" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-949664" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-949664" does not exist

>>> k8s: coredns logs:
error: context "cilium-949664" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-949664" does not exist

>>> k8s: api server logs:
error: context "cilium-949664" does not exist

>>> host: /etc/cni:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: ip a s:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: ip r s:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: iptables-save:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: iptables table nat:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-949664

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-949664

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-949664" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-949664" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-949664

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-949664

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-949664" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-949664" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-949664" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-949664" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-949664" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: kubelet daemon config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> k8s: kubelet logs:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-5690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-362724
contexts:
- context:
    cluster: cert-expiration-362724
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:23:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-362724
  name: cert-expiration-362724
current-context: cert-expiration-362724
kind: Config
users:
- name: cert-expiration-362724
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.crt
    client-key: /home/jenkins/minikube-integration/21978-5690/.minikube/profiles/cert-expiration-362724/client.key
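Note: the kubeconfig dump corroborates the errors above: its only cluster, context, and user belong to cert-expiration-362724 (left behind by an earlier test), and cilium-949664 appears nowhere. Assuming the same kubeconfig, this is quickest to see with:

  kubectl config get-contexts
  # CURRENT   NAME                     CLUSTER                  AUTHINFO                 NAMESPACE
  # *         cert-expiration-362724   cert-expiration-362724   cert-expiration-362724   default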

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-949664

>>> host: docker daemon status:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: docker daemon config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: docker system info:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: cri-docker daemon status:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: cri-docker daemon config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: cri-dockerd version:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: containerd daemon status:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: containerd daemon config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: containerd config dump:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: crio daemon status:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: crio daemon config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: /etc/crio:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

>>> host: crio config:
* Profile "cilium-949664" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949664"

----------------------- debugLogs end: cilium-949664 [took: 4.117972048s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-949664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-949664
--- SKIP: TestNetworkPlugins/group/cilium (4.32s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-626367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-626367
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
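Note: this subtest is gated on the VirtualBox driver, so under the docker driver used in this run it always skips. A sketch of the configuration it targets, assuming a host with VirtualBox installed (hypothetical invocation; flag names as published by minikube start):

  # --disable-driver-mounts turns off the hypervisor-provided filesystem mounts,
  # which is only meaningful for VM drivers such as virtualbox
  minikube start -p disable-driver-mounts-626367 --driver=virtualbox --disable-driver-mounts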